Showing posts with label recommendations.

Friday, May 7, 2021

How can medical residency candidates be evaluated more reliably?

Standardized tests as measures of physician aptitude are falling into disrepute and disuse. Consequently, the medical profession needs to develop better ways for evaluators (e.g., med school professors) to communicate information about applicants to residency programs.

Here are two reflections on the current state of affairs in orthopaedic surgery.

Are Narrative Letters of Recommendation for Medical Students Interpreted as Intended by Orthopaedic Surgery Residency Programs?  by Egan, Cameron R. MD; Dashe, Jesse MD; Hussein, Amira I. PhD; Tornetta, Paul III MD

Clinical Orthopaedics and Related Research: February 25, 2021 - doi: 10.1097/CORR.0000000000001691

"Background: Narrative letters of recommendation are an important component of the residency application process. However, because narrative letters of recommendation are almost always positive, it is unclear whether those reviewing the letters understand the writer’s intended strength of support for a given applicant.

"Questions/purposes: (1) Is the perception of letter readers for narrative letters of recommendation consistent with the intention of the letter’s author? (2) Is there inter-reviewer consistency in selection committee members’ perceptions of the narrative letters of recommendation?

"Methods: Letter writers who wrote two or more narrative letters of recommendation for applicants to one university-based orthopaedic residency program for the 2014 to 2015 application cycle were sent a survey linked to a specific letter of recommendation they authored to assess the intended meaning regarding the strength of an applicant. A total of 247 unstructured letters of recommendation and accompanying surveys were sent to their authors, and 157 surveys were returned and form the basis of this study (response percentage 64%). The seven core members of the admissions committee (of 22 total reviewers) at a university-based residency program were sent a similar survey regarding their perception of the letter.

...

"Conclusion :Our results demonstrate that the reader’s perception of narrative letters of recommendation did not correlate well with the letter writer’s intended meaning and was not consistent between letter readers at a single university-based urban orthopaedic surgery residency program.

"Clinical Relevance: Given the low correlation between the intended strength of the letter writers and the perceived strength of those letters, we believe that other options such as a slider bar or agreed-upon wording as is used in many dean’s letters may be helpful."

**********

CORR Insights®: Are Narrative Letters of Recommendation for Medical Students Interpreted as Intended by Orthopaedic Surgery Residency Programs? by Zywiel, Michael G. MD, MSc, Clinical Orthopaedics and Related Research: April 29, 2021 - doi: 10.1097/CORR.0000000000001780

"With the upcoming transition of the USMLE Step 1 to a pass/fail score, and as we continue to gather more evidence calling into question the current selection criteria used for surgical training, programs are increasingly left to wonder how they can select learners that are most likely to succeed. Similarly, learners are increasingly left wondering how they can appropriately determine whether they are likely to succeed in a chosen specialty.

...

"Going forward, we need more research within the domain of selection criteria for training. This includes identifying more reliable predictors of technical skill, nontechnical skill, as well as performance in independent practice. The failure of most current selection criteria to adequately predict performance suggests that novel, specialty-specific instruments may need to be developed, evaluated, and ultimately incorporated at the medical student level to better predict future performance."

Tuesday, September 22, 2020

Filtering 'inappropriate' content is still a challenge for AI--human monitoring of YouTube

The Financial Times has the story:

YouTube reverts to human moderators in fight against misinformation--Video platform relied on machines after lockdown but found they over-censored borderline content

by Alex Barker and Hannah Murphy 

"Google’s YouTube has reverted to using more human moderators to vet harmful content after the machines it relied on during lockdown proved to be overzealous censors of its video platform.

"When some of YouTube’s 10,000-strong team filtering content were “put offline” by the pandemic, YouTube gave its machine systems greater autonomy to stop users seeing hate speech, violence or other forms of harmful content or misinformation.

"But Neal Mohan, YouTube’s chief product officer, told the Financial Times that one of the results of reducing human oversight was a jump in the number of videos removed, including a significant proportion that broke no rules." 

*****************

Wired Magazine has a good backgrounder on YouTube's attempt to use AI to alter the recommender engine:

YouTube’s Plot to Silence Conspiracy Theories--From flat-earthers to QAnon to Covid quackery, the video giant is awash in misinformation. Can AI keep the lunatic fringe from going viral?

by Clive Thompson

Friday, May 1, 2020

Yelping about restaurant health violations, by Dai and Luca

Online review platforms, like Yelp, can play an effective role in disseminating information about restaurant hygiene scores...

Digitizing Disclosure: The Case of Restaurant Hygiene Scores
By Weijia Dai and Michael Luca*
American Economic Journal: Microeconomics 2020, 12(2): 41–59


Abstract: Collaborating with Yelp and the City of San Francisco, we revisit a canonical example of quality disclosure by evaluating and helping to redesign the posting of restaurant hygiene scores on Yelp.com. We implement a two-stage intervention that separately identifies consumer response to information disclosure and a disclosure design with improved salience—a consumer alert. We find score posting is effective, but improving salience further increases consumer response.

They write:

"The rise of online platforms raises the opportunity to revisit disclosure initiatives and to use a design economics lens to improve decisions about where and how disclosure is delivered.
...
"Our intervention proceeds in two stages. In the first stage, which begins in early 2013, Yelp starts posting hygiene scores on San Francisco restaurants. We analyze the impact of the score posting using a difference-in-differences strategy, comparing differential changes in restaurant demand after the score posting between restaurants with low hygiene scores that are more likely to be affected by the score posting (as predefined by the city) and those with higher scores.2  To shed light on demand,
we drew on purchase intention measures, such as a user calling or seeking directions to a restaurant or clicking on the restaurant’s own website link. We find that hygiene score posting on Yelp leads to a 13 percent decrease in purchase intentions for restaurants with low hygiene scores.
...
"Following the initial hygiene score posting intervention, in the second stage, we worked with Yelp in 2015 to develop a “hygiene alert”—a salient message that appears only on the Yelp pages of restaurants with low hygiene scores. Such low-score restaurants are identified by the city as having “poor” operating conditions with “high-risk” hygiene violations.

"We find a further 7 percent decrease in purchase intentions for restaurants with low scores due to hygiene alerts, and an 11 percent decrease in the number of reviews, nearly doubling the effect of the score posting for these businesses. On the restaurant side, although there is no evidence showing an improvement of restaurant hygiene scores across restaurants, we find suggested evidence of a decrease in the probability that the low-score restaurants will receive low scores again after the alert."

Wednesday, April 15, 2020

Coronavirus information, mis-information, conspiracy theories, web search, and social media

The coronavirus / Covid-19 pandemic is constantly generating new information, and misinformation.  How to separate them?

A number of social media sites, and Google, have decided not to rely on their organic recommender systems. So, for example, if you search for "coronavirus" in Google, you get what appear to be all curated sites (e.g. government and university sources, and major newspapers), and if you google "coronavirus conspiracy theories" you get news stories about some of the craziness out there, but not the firsthand insanity.
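Mechanically, this sort of curation amounts to an override layer in front of the organic ranker: queries that match a sensitive topic return a hand-maintained allowlist of sources rather than algorithmically ranked results. A sketch of the pattern (purely illustrative; not how Google actually implements it):

```python
# Sketch of a curated-override layer in front of an organic search
# ranker (illustrative pattern only, not Google's implementation).
CURATED = {
    "coronavirus": [
        "https://www.cdc.gov/coronavirus",
        "https://www.who.int/emergencies/diseases/novel-coronavirus-2019",
    ],
}

def search(query: str, organic_ranker) -> list[str]:
    """Return curated sources for sensitive topics; otherwise fall
    back to the ordinary (engagement-driven) organic ranking."""
    q = query.lower()
    for topic, sources in CURATED.items():
        if topic in q:
            return sources          # gatekeepers in, algorithms out
    return organic_ranker(query)    # normal path

# Usage: search("coronavirus conspiracy theories", my_ranker) returns
# only the curated list, matching the behavior described above.
```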

The Guardian has this story:

Tech giants struggle to stem 'infodemic' of false coronavirus claims

"Click over to Google, type in “coronavirus”, and press enter.

"The results you see will bear little resemblance to any other search.

"There are no ads, no product recommendations, and no links to websites that have figured out how to win the search engine optimisation game. Government, NGO and mainstream media sources dominate.

"Algorithms and user-generated content are out; gatekeepers and fact checking are in.
...
"Across the social web – on Facebook, Twitter, YouTube, Reddit, Instagram and Pinterest – search results related to Covid-19 are similarly predetermined.

"Instagram delivers a pop-up urging US users to go to the website for the Centers for Disease Control and Prevention (CDC) – or UK users to the NHS – rather than look at the memes and pictures tagged with #coronavirus.

"On Facebook, a dedicated “Information Center” includes a mix of curated information and official medical advice. On Pinterest, the only infographics and memes to be found on topics such as “Covid-19” or “hydroxychloroquine” are those made by internationally recognised health organisations, such as the WHO.
...
"Another complicating factor is that normally trustworthy sources are not providing reliable information.

“We’ve seen the US government, particularly the White House, becoming a significant purveyor of misinformation around the virus,” Bergstrom said.

"Facebook and Twitter have removed posts by prominent and powerful people over coronavirus misinformation, including the Brazilian president, Jair Bolsonaro, but the real test of their resolve will be whether they ever take action against misinformation by Trump."
*********

Here's another story, concerning a particular conspiracy theory:

Facebook acts to halt far-right groups linking Covid-19 to 5G

"Facebook has stepped up efforts to stop the promotion of baseless conspiracy theories linking Covid-19 to 5G, after research highlighted a “toxic cocktail” of far-right-influenced groups pushing the idea alongside incitement to attack telecommunications infrastructure.

"Groups in the UK promoting the conspiracy theory on Facebook – often linking it to explicitly antisemitic messages – have been growing at a significant rate, warned the campaign group Hope not Hate.

"The largest group in the UK, Stop 5G UK, added almost 3,000 members in just 24 hours from 6-7 April while another, Direct Action Against 5G, gained more than 1,400 members in its first week after it was created on 31 March"

Wednesday, March 4, 2020

Can YouTube's recommender engine tone down conspiracy theories?

The NY Times has the story:

Can YouTube Quiet Its Conspiracy Theorists?
A new study examines YouTube’s efforts to limit the spread of conspiracy theories on its site, from videos claiming the end times are near to those questioning climate change. By Jack Nicas

"In January 2019, YouTube said it would limit the spread of videos “that could misinform users in harmful ways.”

"One year later, YouTube recommends conspiracy theories far less than before. But its progress has been uneven and it continues to advance certain types of fabrications, according to a new study from researchers at University of California, Berkeley.

"YouTube’s efforts to curb conspiracy theories pose a major test of Silicon Valley’s ability to combat misinformation, particularly ahead of this year’s elections. The study, which examined eight million recommendations over 15 months, provides one of the clearest pictures yet of that fight, and the mixed findings show how challenging the issue remains for tech companies like Google, Facebook and Twitter."
***********

The paper referred to seems to be this one on Hany Farid's website at Berkeley. It's very recent ("compiled on March 2, 2020," the day of the NY Times story), and e.g. it finds that coronavirus conspiracy videos are not being recommended despite being readily available.

A longitudinal analysis of YouTube’s promotion of conspiracy videos
Marc Faddoul, Guillaume Chaslot, and Hany Farid


Abstract: Conspiracy theories have flourished on social media, raising concerns that such content is fueling the spread of disinformation, supporting extremist ideologies, and in some cases, leading to violence. Under increased scrutiny and pressure from legislators and the public, YouTube announced efforts to change their recommendation algorithms so that the most egregious conspiracy videos are demoted and demonetized. To verify this claim, we have developed a classifier for automatically determining if a video is conspiratorial (e.g., the moon landing was faked, the pyramids of Giza were built by aliens, end of the world prophecies, etc.). We coupled this classifier with an emulation of YouTube’s watch-next algorithm on more than a thousand popular informational channels to obtain a year-long picture of the videos actively promoted by YouTube. We also obtained trends of the so-called filter-bubble effect for conspiracy theories.
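The paper's classifier is not reproduced here, but the general approach (scoring a video's text as conspiratorial or not) can be sketched with standard tools. A minimal, hypothetical version using TF-IDF features and logistic regression on a few made-up transcript snippets; the authors' actual model combines richer signals:

```python
# Minimal sketch of a conspiracy-video text classifier (hypothetical
# training data; the paper's actual features and model are richer,
# drawing on transcripts, metadata, and comments).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = conspiratorial, 0 = not.
texts = [
    "the moon landing was staged in a studio",
    "nasa footage documents the apollo 11 landing",
    "the pyramids were built by alien visitors",
    "archaeologists explain how the pyramids were built",
]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

# Score a new video's transcript: probability it is conspiratorial.
print(clf.predict_proba(["proof the earth is flat"])[0][1])
```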



And here are the concluding paragraphs:


"Summary. The overall reduction of conspiratorial recommendations is an encouraging trend. Nonetheless, this reduction does not make the problem of radicalization on YouTube obsolete nor fictional, as some have claimed (41). Aggregatedata hide very different realities for individuals, and although radicalization is a serious issue, it is only relevant for a fraction of the users. Those with a history of watching conspiratorial content can certainly still experience YouTube as filter-bubble, reinforced by personalized recommendations and channel subscriptions. In general, radicalization is a more complex problem than what an analysis of default recommendations cans cope, for it involves the unique mindset and viewing patternsof a user interacting over time with an opaque multi-layer neural network tasked to pick personalized suggestions from a dynamic and virtually infinite pool of ideas.


"With two billion monthly active users on YouTube, the design of the recommendation algorithm has more impact on the flow of information than the editorial boards of traditional media. The role of this engine is made even more crucial in the light of (1) The increasing use of YouTube as a primary source of information, particularly among the youth (42); (2)The nearly monopolistic position of YouTube on its market;and (3) The ever-growing weaponization of YouTube to spread disinformation and partisan content around the world (43). And yet, the decisions made by the recommendation engine are largely unsupervised and opaque to the public.


"This research is an effort to make the behavior of the algorithm more transparent, in an effort to increase the awarenessof the public and YouTube’s accountability for their statements. We hope it will fuel a public discussion, not about whether YouTube should allow for conspiratorial content on the platform, but about whether such content is appropriate to be part of the baseline recommendations on the informational YouTube.

Monday, January 20, 2020

Recommender systems behaving badly: YouTube and Instagram

Why are readers drawn to sensationalist stories?  Why do content providers produce them?  It likely has something to do with the recommender systems that direct readers' attention to certain stories more than to others.

Time magazine has the YouTube story:

YouTube Has Been 'Actively Promoting' Videos Spreading Climate Denialism, According to New Report

"YouTube has been “actively promoting” videos containing misinformation about climate change, a report released Thursday by campaign group Avaaz claims, despite recent policy changes by the platform intended to drive users away from harmful content and conspiracy theories.
"The “up next” feature dictates what users watch for 70% of the time they spend on YouTube. The exact make-up of the YouTube algorithm that drives recommendations, designed to keep users on the platform for as long as possible, is a closely guarded secret. Experts say the algorithm appears to have learned that radical or outrageous content is more likely to engage viewers.Avaaz examined 5,537 videos retrieved by the search terms “climate change,” global warming” and “climate manipulation,” and then the videos most likely to be suggested next by YouTube’s “up next” sidebar. For each of those search terms respectively, 8%, 16% and 21% of the top 100 related videos included by YouTube in the “up-next” feature contained information that goes against the scientific consensus on climate change – such as denying climate change is taking place, or claiming that human activity is not a cause of climate change. Avaaz claims this promotion process means YouTube is helping to spread climate denialism."
...
"The “up next” feature dictates what users watch for 70% of the time they spend on YouTube. The exact make-up of the YouTube algorithm that drives recommendations, designed to keep users on the platform for as long as possible, is a closely guarded secret. Experts say the algorithm appears to have learned that radical or outrageous content is more likely to engage viewers.

**********
The NY Times has the Instagram story:

This Is the Guy Who’s Taking Away the Likes

"Likes are the social media currency undergirding an entire influencer economy, inspiring a million Kardashian wannabes and giving many of us regular people daily endorphin hits. But lately, Mr. Mosseri has been concerned about the unanticipated consequences of Instagram as approval arbiter.
...
"Mr. Mosseri knows something about dealing with dystopian tech fallout. He came to Instagram in October 2018 after years overseeing the Facebook News Feed, an unwitting engine of fake news, inflammatory rhetoric and disinformation. He wants to avoid similar pitfalls at Instagram, which is owned by Facebook.