Showing posts with label fraud.

Monday, May 20, 2024

The labor market for OnlyFans chatters

 Here's a story by a professional writer and journalist, who appears to be a middle-aged dad, about his efforts to find and then master a job impersonating a 20-something female sex performer chatting with her fans on the website OnlyFans.

Wired has the story:

I Went Undercover as a Secret OnlyFans Chatter. It Wasn’t Pretty. Your online influencer girlfriend is actually a rotating cast of low-wage workers. I became one of them. by BRENDAN I. KOERNER

"Like many of OnlyFans’ top earners, she had hired a management agency to help keep up with her customers’ demands for personal attention. “The chat specialists they give you, that was a huge deal for me,” she said. The agency provided a team of contractors whose sole job is to masquerade as the creator while swapping DMs with her subscribers. These textual conversations are meant to be the main way that OnlyFans users can interact with the models they adore.

"The existence of professional OnlyFans chatters wouldn’t have surprised me so much if I’d given just a few moments’ thought to the mathematical realities of the platform. OnlyFans has thrived by promising its reported 190 million users that they can have direct access to an estimated 2.1 million creators. It’s impossible for even a modestly popular creator to cope with the avalanche of messages they receive each day. The $5.6 billion industry has solved this logistical conundrum by entrusting its chat duties to a hidden proletariat, a mass of freelancers who sustain the illusion that OnlyFans’ creators are always eager to engage—sexually and otherwise—with paying customers.

...

"Gradually I realized that my best shot at understanding how chatters operate would be to join their ranks. As an English major who’s been fortunate enough to make a living with words for more than 20 years, I naively assumed I was qualified to land a gig. And as a writer, I was curious to learn what kind of artistry the job would require—what it takes to ensure that OnlyFans users never doubt they’re really interacting with the objects of their desire.

"AS I EMBARKED on my job hunt, I asked the owner of a top-tier OnlyFans agency for tips on how to make myself an appealing candidate. He was pessimistic about my odds of getting hired, mainly because I’m American. He said agencies tend to favor contractors who reside in lower-wage countries. That insight was borne out as I poked around the online communities where chatters find help-wanted ads; though the vast majority of OnlyFans users live in the US, the bulk of my competitors were based in places like the Philippines and Venezuela. Judging by their posts on the r/OnlyFansChatter subreddit and in an invite-only Facebook group, these workers are relatively well-educated, with university-level English and ace typing skills that some developed in high-pressure call centers. They also put up with all manner of abuses: OnlyFans agencies are notorious for stiffing their freelancers, forcing them to work 70-hour weeks, and summarily firing them if they miss a shift due to a power outage."


Saturday, March 30, 2024

Fraud in physics? Room temp superconductors, again

It should come as no surprise that it's not only the social sciences that can be roiled by accusations of research misconduct.

Here's a story in Nature about a scientist who had a paper retracted from Nature, and then had another one accepted and later retracted as well, both about room-temperature superconductors. It's a long, detailed story, but it says something about both science and peer review.

Superconductivity scandal: the inside story of deception in a rising star’s physics lab. Ranga Dias claimed to have discovered the first room-temperature superconductors, but the work was later retracted. An investigation by Nature’s news team reveals new details about what happened — and how institutions missed red flags. By Dan Garisto

"A researcher at the University of Rochester in New York, Dias achieved widespread recognition for his claim to have discovered the first room-temperature superconductor, a material that conducts electricity without resistance at ambient temperatures. Dias published that finding in a landmark Nature paper1.

"Nearly two years later, that paper was retracted. But not long after, Dias announced an even bigger result, also published in Nature: another room-temperature superconductor2.

...

" Nature has since retracted his second paper2 and many other research groups have tried and failed to replicate Dias’s superconductivity results. ...The scandal “has damaged careers of young scientists...

...

"Three previous investigations ... by the University of Rochester did not find evidence of misconduct. But last summer, the university launched a fourth investigation,... That fourth investigation is now complete and, according to a university spokesperson, the external experts confirmed that there were “data reliability concerns” 

...

"Nature retracted the CSH paper on 26 September 2022, with a notice that states “issues undermine confidence in the published magnetic susceptibility data as a whole, and we are accordingly retracting the paper”.

...

"Felicitas Heβelmann, a specialist in retractions at the Humboldt University of Berlin, says misconduct is difficult to prove, so journals often avoid laying blame on authors in retractions. “A lot of retractions use very vague language,” she says.

...

"The lack of industry-wide standards for investigating misconduct leaves it unclear whether the responsibility to investigate lands more on journals or on institutions.

...

"After Nature published the LuH paper in March 2023, many scientists were critical of the journal’s decision, given the rumours of misconduct surrounding the retracted CSH paper.

...

"All four referees agreed that the findings, if true, were highly significant. But they emphasized caution in accepting the manuscript, because of the extraordinary nature of the claims. Referee 4 wrote that the journal should be careful with such extraordinary claims to avoid another “Schön affair”, referring to the extensive data fabrication by German physicist Jan Hendrik Schön, which has become a cautionary tale in physics and led to dozens of papers being retracted, seven of them in Nature. Referees 2 and 3 also expressed concern about the results because of the CSH paper, which at the time bore an editor’s note of concern but had not yet been retracted. 

...

"When asked why Nature considered Dias’s LuH paper after being warned of potential misconduct on the previous paper, Magdalena Skipper, Nature’s editor-in-chief, said: “Our editorial policy considers every submission in its own right.” The rationale, Skipper explains, is that decisions should be made on the basis of the scientific quality, not who the authors are."

Friday, January 19, 2024

Incentives and mis-incentives in science (Freakonomics part II)

 Freakonomics has a second post on fraud in science, and you can listen or read the transcript here:

Can Academic Fraud Be Stopped?

Two quotes stood out for me:

1. VAZIRE: Oh, I don’t mind being wrong. I think journals should publish things that turn out to be wrong. It would be a bad thing to approach journal editing by saying we’re only going to publish true things or things that we’re 100 percent sure are true. The important thing is that the things that are more likely to be wrong are presented in a more uncertain way. And sometimes we’ll make mistakes even there. Sometimes we’ll present things with certainty that we shouldn’t have. What I would like to be involved in and what I plan to do is to encourage more post-publication critique and correction, reward the whistleblowers who identify errors that are valid and that need to be acted upon, and create more incentives for people to do that, and do that well.

...

2. BAZERMAN: Undoubtedly, I was naive. You know, not only did I trust my colleagues on the signing-first paper, but I think I’ve trusted my colleagues for decades, and hopefully with a good basis for trusting them. I do want to highlight that there are so many benefits of trust. So, the world has done a lot better because we trust science. And the fact that there’s an occasional scientist who we shouldn’t trust should not keep us from gaining the benefit that science creates. And so one of the harms created by the fraudsters is that they give credibility to the science-deniers who are so often keeping us from making progress in society.


############

Earlier:

Sunday, January 14, 2024

"Why Is There So Much Fraud in Academia?" Freakonomics interviews Max Bazerman and others

Below is the latest Freakonomics podcast (and transcript), on fraud in academia.  Those most in the headlines weren't available to be interviewed, but their coauthor (and my longtime HBS colleague) Max Bazerman gives his perspective.

Also interviewed are the Data Colada authors/data sleuths Leif Nelson, Uri Simonsohn, and Joe Simmons (with some clues about the name of their blog), and Brian Nosek, who founded the prizewinning Center for Open Science (https://www.cos.io/).

Here it is:

Why Is There So Much Fraud in Academia?  Some of the biggest names in behavioral science stand accused of faking their results. Freakonomics EPISODE 572.

######

And here are two paragraphs from Max's HBS web page (linked above), suggesting more to come:

"I have been connected to one of the most salient episodes of data fabrication in the history of social science – involving the signing first effect alluded to above. I am working on understanding all known social science frauds in this millennium. Social science also struggles with a broader problem, namely the fact that many studies fail to replicate due to faulty research practices that have become common in social science. Most replication failures can be traced back to the original researchers twisting their data to conform to their predictions, rather than from outright fraud. Trying to produce “significant” results, they may run a study multiple times, in a variety of ways, then selectively report the tests that worked and fail to report those that didn’t. The result is the publication of conclusions that do not hold up as accurate. Both problems – outright data fabrication and this reporting bias that shapes results – need to be tackled, so all of us in academia can publish results that are replicable and can help create value in society.

         "The last dozen years have witnessed multiple efforts to reform social science research to make it more credible, reproducible, and trusted. I am writing a book on reforming social science, which will provide an account of recent data fabrications, and highlight strategies to move forward to create more credible and impactful scientific research."

Friday, September 15, 2023

Regulating research and research misconduct

 Peer review is a clunky process that often misfires, but it helps to keep up the quality of published research. Much the same can be said about procedures for investigating allegations of research misconduct: they help clean up the research record, but they can also harm the innocent (particularly when accusations may be weaponized against competitors).*

Nature has the story of an investigator whose research was put on hold for four years before the accusations against him were dismissed after a lengthy investigation:

‘Gagged and blindsided’: how an allegation of research misconduct affected our lab. Bioengineer Ram Sasisekharan describes the impact of a four-year investigation by the Massachusetts Institute of Technology, which ultimately cleared him.  by Anne Gulland

"In May 2019, a phone call to Ram Sasisekharan from a reporter at The Wall Street Journal triggered a chain of events that stalled the bioengineer’s research, decimated his laboratory group and, he says, left him unable to help find treatments for emerging infectious diseases during a global pandemic.

"The journalist had rung Sasisekharan, who works at the Massachusetts Institute of Technology (MIT) in Cambridge, for his comment on an article in the journal mAbs that had been published a few days previously1. The article alleged that Sasisekharan and his co-authors had “an intent to mislead as to the level of originality and significance of the published work”.

...

"At first, Sasisekharan assumed this was a storm he could weather by providing scientific evidence to refute the allegation, which related to two papers he had published with collaborators, in the Proceedings of the National Academy of Sciences (PNAS)2 and Cell Host & Microbe3. But then, MIT received a formal complaint of research misconduct against Sasisekharan, triggering an internal investigation that took more than three years and only concluded this March, when he was exonerated.

...

"Although the accusation had a huge impact on him in terms of his reputation, it was even harder for his staff, he says. “A lab is like a family — you have undergraduate and graduate students, as well as postdocs. The culture of a group and how we communicate is what makes it vibrant, and it was terrible to see how the lab suffered as a consequence of these very public allegations.” He adds: “You get really isolated, you stop being invited to things. There was this dark cloud hanging over us because we just couldn’t talk about it openly or defend ourselves.”

#######

*the postscript of this previous post comes to mind.

Monday, August 30, 2021

Symposium on Research Integrity: Replicability and outright malfeasance. November 24 in Berlin

 Research integrity has been in the news lately, concerning low levels of replicability in some kinds of research, together with more intentional (but probably/hopefully less widespread) problems.  Here's a forthcoming symposium:

Symposium on Integrity in Research

Place and time: hybrid event on November 24, 2021.

"Symposium topic: Research Integrity is a controversial topic within academia, but also in public discourse. Prominent cases of scientific misconduct capture the limelight, but recently a multitude of issues beyond the classical triad of plagiarism, falsification, and fabrication have taken center stage, many of which concern the quality and rigor of research and further challenge the trustworthiness of science. These include selective publication and file-drawer problems, various biases, as well as the general reproducibility or robustness of research results. At the same time, open, inclusive, and creative cultures of research in teams and organizations have been threatened by practices that prioritize outputs, as demonstrated by numerous examples of insufficient mentoring, unfair authorship practices, or intransparency about career progress for younger researchers. All these issues have long histories of discussions about improving scientific methods, however, views differ on how important these issues are, and whether all disciplines are affected equally. While some argue that there is only one scientific method, requiring universal standards for robust evidence, others emphasize the diversity of research cultures and the mutual criticism and learning that can result from this diversity.

"As scientific expertise becomes more important in and for the public, it becomes apparent that scientific findings are often provisional, subject to correction, and scientific experts may disagree. There are no simple either-or answers. While it seems indisputable that scientific evidence should be subject to the highest possible standards and be appropriate to the context, it is nevertheless necessary that these standards evolve as scientific methods and research questions progress. For urgent societal problems - such as pandemics - we may even be willing to lower these standards. For new problems, appropriate standards will only emerge after much experimentation and debate. The need for such constant debate is familiar to scientists, but can be disconcerting to the public. Standardization - both in the sense of setting standards and in the sense of homogenization - of research can therefore run the risk of undermining, rather than securing, the progress of knowledge. As a result, the integrity of research must remain a topic for debate, as it is expected to ensure both the robustness and innovation of research while meeting the expectations of different research cultures and the public.

"As a contested topic, research integrity encompasses a wide range of actors, platforms and organizations, policies and measures. The symposium will bring together participants from research, practice and policy to map this heterogeneous field, provide evidence of its effectiveness and analyze its (future) development.


"Keynotes and formats

"The symposium is organized as a hybrid event and will include keynotes by Professor Lorraine Daston (Max Planck Institute for the History of Science Berlin / University of Chicago) and Professor Dava J. Newman (Massachusetts Institute of Technology / MIT Media Lab).

"Other international and national experts will be invited together with the event's guests to discuss and reflect on the current state of research integrity and its future development. Interactive and inclusive formats connecting on-site participants and digital guests worldwide will ensure a sustained exchange on a topic of central importance for the future of science.

The symposium is organized in concurrence with the Einstein Foundation Berlin’s “Einstein Award for Promoting Quality in Research” that will take place later the same evening.

Registration, access and program

Participation in the symposium is free of charge. Further details on registration, access and the program of the event will be published on this page soon.

Contact  Nele Albrecht, Scientific Coordinator for Research Quality  Email: core@berlin-university-alliance.de 

Tuesday, January 7, 2020

Fake rhino horn

When is it ok to fight one repugnant transaction with another?  The New York Times has a story of fighting the (repugnant) sales of rhinoceros horn by flooding the market with fake rhinoceros horn, i.e. fighting the species-endangering trafficking of rhino horn by selling fakes.  Of course the success of such a strategy for reducing poaching depends on whether fakes are a substitute or a complement for the real thing--e.g. it will fail if the fakes increase the size of the market in ways that increase poaching, rather than satisfying the demand more cheaply (or if fear of convincing fakes reduces demand...)

Scientists Created Fake Rhino Horn. But Should We Use It?
Experts are divided over whether flooding the Asian market with convincing artificial rhino horn would help or hurt rhinos’ survival.  By Rachel Nuwer

"In Africa, 892 rhinos were poached for their horns in 2018, down from a high of 1,349 killed in 2015. The decline in deaths is encouraging, but conservationists agree that poaching still poses a dire threat to Africa’s rhino population, which hovers around 24,500 animals.
Now, in the hopes of driving down the value of rhino horn and reducing poaching even more, scientists have created a convincing artificial rhino horn made from horsehair.
... 
Dr. Vollrath believes his artificial horn could be used to covertly flood the market with a cheap, convincing replacement, reducing the demand that leads to rhinos being slaughtered. He also hopes it might provide an educational tool for “demystifying that rhino horn’s something very special,” he said.
...
"Critics say that fake rhino horn risks stimulating demand for real horn, and that it would complicate policing. “There’s already scarce resources for wildlife crime and we don’t want to make it even more difficult for law enforcement,” said Ms. Swaak-Goldman, who works with governments and law enforcement agencies.
Peter Knights, chief executive officer of WildAid, a nonprofit organization dedicated to ending illegal wildlife trade, added that the market in Vietnam is already flooded with convincing fakes, like water buffalo horn, which accounts for up to 90 percent of what’s sold as rhino horn. “It’s widely known that there is a lot of fake product out there, so this experiment is already running,” Mr. Knights said."
*************
See also

The Economics of Synthetic Rhino Horns

32 Pages Posted: 24 Aug 2016 Last revised: 10 Aug 2017

Frederick Chen

Wake Forest University
Date Written: June 1, 2017

Abstract

To examine the potential impact of synthetic horns to reduce rhino poaching, a formal model of the rhino horn market in which there exist firms with the capability to produce high quality synthetic horns is presented and studied. The analysis shows that whether the availability of synthetic horns would decrease the equilibrium supply of wild horns -- and how much the reduction would be -- depends on market structure -- i.e., how competitive the synthetic horn production sector is -- and on how substitutable the synthetic horns are for wild horns. The implications of these results for conservation policies are derived and discussed. Synthetic horn producers would benefit more by promoting their products as being superior to wild horns, but this could increase horn prices and lead to more rhino poaching. For conservation purposes, it may be beneficial to incentivize firms to produce inferior fakes -- synthetic horns that are engineered to be undesirable in some respect but difficult for buyers to distinguish from wild horns. The analysis also shows that promoting competition in the production of synthetic horns in general is desirable from a conservation standpoint as synthetic horn producers may prefer to keep prices at a high enough level that could still encourage significant amount of poaching.
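To see the intuition, here is a toy numerical sketch of the substitution logic, not Chen's actual model: demand for wild horn is assumed to shift up with the price of synthetic horn, and poaching supply is assumed to increase in the wild-horn price, so cheaper (more competitively supplied) synthetics pull down the equilibrium amount of poaching. All parameter values are invented for illustration.

```python
# A toy equilibrium sketch of the substitution logic in Chen's abstract -- NOT his model.
# Assumptions (all made up for illustration): linear inverse demand for wild horn that
# shifts up with the price of synthetic horn, and poaching supply increasing in price.
def wild_horn_equilibrium(synthetic_price, a=100.0, b=1.0, c=0.5, s=0.8):
    """a, b: demand intercept/slope; c: substitutability (0 = no substitution);
    s: responsiveness of poaching supply to the wild-horn price."""
    # Demand: p_w = a - b*q_w + c*p_s.  Supply: q_w = s*p_w.  Solve for p_w, then q_w.
    p_w = (a + c * synthetic_price) / (1 + b * s)
    return s * p_w  # equilibrium quantity of poached horn

# Competitive synthetic producers price near marginal cost; a cartel keeps prices high.
for label, p_s in [("competitive synthetic sector (p_s = 10)", 10.0),
                   ("high-priced synthetic sector (p_s = 80)", 80.0)]:
    print(f"{label}: poached quantity = {wild_horn_equilibrium(p_s):.1f}")
# Higher synthetic prices prop up demand for the real thing, so in this toy setup
# competition among synthetic producers reduces poaching -- one of the abstract's points.
```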

Thursday, June 6, 2019

Someone is impersonating me on LinkedIn

An alert MIT grad student received a LinkedIn request that appeared to be from me, except not quite, and was kind enough to let me know. (In fact I don't have a LinkedIn account...)

There is someone on LinkedIn, it turns out, pretending to be me--same name, same jobs at Harvard and Stanford, same 2012 Nobel prize. Here's his profile:
 https://www.linkedin.com/in/alvin-roth-a37995186/

If you have a LinkedIn account, you can go to that profile, click on the three dot link, and get to a link called Report, one of whose options is to 'report an impersonator'.  (You can't send any explanatory text, so maybe getting many such reports will prompt LinkedIn to do something, where one or two haven't yet done the trick...)

thanks,
al

p.s. Here's the LinkedIn help page for reporting fake profiles (if you have a LinkedIn account).
***************
Update: LinkedIn followed up pretty quickly, and apparently nuked the fake account:

Response (06/06/2019 23:02 CST)

Hi Alvin,

I hope this email finds you well and that you are having a fantastic day, I'm Heisenberg from the Safety Operations Team and I'll be happy to assist you!

Thanks for informing us of this situation.

It is against the terms of LinkedIn's User Agreement and Professional Community Policies to impersonate another person on the website. We'll take the appropriate action based on the results of our investigation.

Thanks for your assistance in making LinkedIn a professional and trustworthy site.

Regards,

Heisenberg
LinkedIn Safety Operations Support Specialist 

Tuesday, August 11, 2015

Conference housing pirates--the (criminal) market for hotel rooms

Here's a scam I hadn't encountered before.

I will be speaking at a transplant conference in February, and last week my phone rang and someone asked me if I had already made my hotel reservations, and offered to make them for me. I declined, and emailed the conference organizer asking if this was how housing was being arranged. In reply I got the following (slightly redacted) email, addressed to all the speakers....

"Dear ... Faculty,

I have received word from two speakers who advised me that they were contacted by a company called Expo Housing. (They can go by other names too) xxx told me she was contacted by a xxx who left an 866 call back number.

This company ...has NOT been contracted to organize, sell or arrange housing for anyone attending or speaking at the [conference] taking place in February 2016 ....

Please DO NOT BOOK housing with anyone. As a speaker you will receive a travel and housing survey from me or another member of the  staff located in the ... National Office. Please contact me immediately if you are contacted by anyone trying to book your housing. 

Our housing website is under construction at this time but again, as a speaker your housing will be arranged by  staff.

Housing pirates or hijackers are illegal entities who "sell" hotel rooms. These rooms can exist or not exist. Often times your money is lost. Typically these people target large meetings like the American Transplant Congress, but no meeting is safe. Any rooms booked through a pirate are not guaranteed by the group nor will they be included in the [conference] block of rooms. "
  

Saturday, July 11, 2015

Cheating in China on (American) college admissions

Inside Higher Ed has the story: In China, No Choice But to Cheat?
July 9, 2015
"EUGENE, Ore. -- Is the admission process broken for Chinese applicants to American colleges?
Variations of that question came up again and again during sessions on Wednesday at the Overseas Association for College Admission Counseling [OACAC] conference. Persistent concerns about standardized test fraud, doctored transcripts and fake admission letters -- and the role of agents in helping to "pollute" the application process (as one session description put it) -- are causing some to worry that Chinese students might think cheating is their only choice.
"We need to make it [the application process] safe for honest applicants," said Terry Crawford, the chief executive officer and co-founder of InitialView, a video interviewing company based in Beijing.
"There's a perception in China that the system is rigged, that if you pay enough money you're going to get the results that you want," Crawford said. He cited a recent China Newsweek article laying out the process and prices for cheating on the Test of English as a Foreign Language (TOEFL) as just one example of the type of story that feeds into this perception (the reporter received test answers during the exam via a small, wireless-enabled watch)."
**************

Interestingly, InitialView, the company that Terry Crawford and Gloria Chyou founded in China, was originally created to address the problem of fraud in English language tests, by offering applicants the opportunity to make a video of an unscripted interview that it conducts, to be sent to colleges, which can then confirm the fluency of the speaker and later verify that the student who enrolls is the one who took the interview.

Tuesday, September 10, 2013

Citation collaboration

"Impact factors" have become important to journals, and efforts to manipulate them come to light from time to time. Here's a Brazilian story from the journal Nature: Brazilian citation scheme outed
Thomson Reuters suspends journals from its rankings for ‘citation stacking’.

"Mauricio Rocha-e-Silva thought that he had spotted an easy way to raise the profiles of Brazilian journals. From 2009, he and several other editors published articles containing hundreds of references to papers in each others’ journals — in order, he says, to elevate the journals’ impact factors.

"Because each article avoided citing papers published by its own journal, the agreement flew under the radar of analyses that spot extremes in self-citation — until 19 June, when the pattern was discovered. Thomson Reuters, the firm that calculates and publishes the impact factor, revealed that it had designed a program to spot concentrated bursts of citations from one journal to another, a practice that it has dubbed ‘citation stacking’. Four Brazilian journals were among 14 to have their impact factors suspended for a year for such stacking. And in July, Rocha-e-Silva was fired from his position as editor of one of them, the journal Clinics, based in São Paulo."

Monday, September 9, 2013

Are behavioral results more likely to be exaggerated than biological results?

That's the claim (reported in the blog Retraction Watch) of
"a new paper in theProceedings of the National Academy of Sciences (PNAS) suggests that it’s behavioral science researchers in the U.S. who are more likely to exaggerate or cherry-pick their findings.
1,174 primary outcomes appearing in 82 metaanalyses published in health-related biological and behavioral research sampled from the Web of Science categories Genetics & Heredity and Psychiatry and measured how individual results deviated from the overall summary effect size within their respective meta-analysis.
Studies whose outcome included behavioral parameters were generally more likely to report extreme effects, and those with a corresponding author based in the US were more likely to deviate in the direction predicted by their experimental hypotheses, particularly when their outcome did not include additional biological parameters. But they didn’t find the same to be true for non-behavioral studies.
Although this latter finding could be interpreted as a publication bias against non-US authors, the US effect observed in behavioral research is unlikely to be generated by editorial biases. Behavioral studies have lower methodological consensus and higher noise, making US researchers potentially more likely to express an underlying propensity to report strong and significant findings.
So where might this predisposition come from, ask the authors?
A complete explanation would probably invoke a combination of cultural, economic, psychological, and historical factors, which at this stage are largely speculative. Our preferred hypothesis is derived from the fact that researchers in the United States have been exposed for a longer time than those in other countries to an unfortunate combination of pressures to publish and winner-takes-all system of rewards (20, 22). This condition is believed to push researchers into either producing many results and then only publishing the most impressive ones, or to make the best of what they got by making them seem as important as possible, through post hoc analyses, rehypothesizing, and other more or less questionable practices (e.g., 10, 13, 22, 26). Such a pattern of modulating forces may gradually become more prevalent also in other countries currently and in the near future (18, 20, 21)."
...
"And Fanelli was also quick to point out that this kind of exaggeration doesn’t seem to be exclusive to the U.S.
The US are an ideal subject because they are relatively homogeneous and yet very big and scientifically productive, so it was easy for us to compare the US to the rest of the world. And of course the US-effect was especially interesting, since it helped us exclude classic explanations, such as editorial biases and simple file-drawer effects. But we suspect that with higher statistical power we would observe specific biases in other countries, in Europe and elsewhere, possibly limited to specific fields and periods in time.
Before opening the floor to what we hope will be a robust discussion, we’ll close with the lovely description of science that opens the paper:
Science is a struggle for truth against methodological, psychological, and sociological obstacles."
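For readers curious what "deviated from the overall summary effect size" means operationally, here is a minimal sketch of that kind of measure: compute an inverse-variance-weighted summary effect for a meta-analysis, express each study's effect as a deviation from it in standard-error units, and compare subgroups. The data below are invented, and the PNAS paper's actual metric and weighting may differ.

```python
# A minimal sketch of the kind of measure described above: how far each primary outcome
# deviates from the summary effect of its meta-analysis, compared across subgroups.
# The PNAS paper's actual metric and weighting may differ; the numbers here are invented.
import numpy as np

# Toy meta-analysis: effect sizes and standard errors for individual studies,
# tagged by whether the corresponding author is US-based (hypothetical data).
effects = np.array([0.10, 0.45, 0.20, 0.60, 0.15, 0.05])
std_errs = np.array([0.10, 0.12, 0.10, 0.15, 0.11, 0.09])
us_based = np.array([False, True, False, True, False, False])

# Fixed-effect summary: inverse-variance weighted mean of the study effects.
weights = 1.0 / std_errs**2
summary = np.sum(weights * effects) / np.sum(weights)

# Each study's deviation from the summary, in standard-error units.
deviation = (effects - summary) / std_errs

print(f"summary effect = {summary:.3f}")
print(f"mean deviation, US-based studies:     {deviation[us_based].mean():+.2f}")
print(f"mean deviation, non-US-based studies: {deviation[~us_based].mean():+.2f}")
# A systematically positive mean deviation for one group would be the sort of pattern
# the authors interpret as a propensity to report stronger-than-typical effects.
```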

Thursday, April 18, 2013

Someone was pretending to be me on Google+ (a story with a happy ending)

The internet is home to a variety of scams, and some of them involve trying to manipulate Google search results. So I was surprised and dismayed, but not entirely shocked, when I noticed that someone was pretending to be me, by establishing a Google+ page for a company called Market Design, whose web page was....this blog.

What could be in it for them?  Well, maybe they were really trying to pretend that by hiring them you were hiring me. But maybe they just were moving up in Google searches through the links that this blog gets.  Here's what you saw if you searched for "market design" on Google:


The first result, on the left, is my blog. But there's an item under it called "Google+ page" and an address, which both link to the spoofer, who is also on the right, with a phone number and a map, and a picture that if you click on it gets you to one of my blog posts.

If you clicked on the link that says Google+ page under the link to my blog, you got to this page, on which my blog URL was clearly displayed as the company web page:



On 4/7/13 I filled out a problem report on the Google+ profile page, and I wrote a review disclaiming any connection between their site and me or my blog...

Apparently Google pays attention to this kind of complaint. When I checked back on 4/9/13 the spoofer was already nowhere to be found.

A recent email confirmed this:

Sent: Tuesday, April 16, 2013 1:09 PM
To: Roth, Alvin
Subject: Google Maps Problem Report - Action taken

Wednesday, January 4, 2012

Scientific misconduct: fraud, plagiarism and all that

A good article on scientific fraud and plagiarism by Charles Gross in The Nation (of all places), focusing on the case of Marc Hauser, but looking at the phenomenon much more widely: Disgrace: On Marc Hauser

"The first formal discussion of scientific misconduct was published in 1830 by Charles Babbage, who held Newton’s chair at Cambridge and made major contributions to astronomy, mathematics and the development of computers. In Reflections on the Decline of Science in England and on Some of Its Causes, Babbage distinguished “several species of impositions that have been practised in science…hoaxing, forging, trimming, and cooking.” An example of “hoaxing” would be the Piltdown man, discovered in 1911 and discredited in 1953; parts of an ape and human skull were combined, supposedly to represent a “missing link” in human evolution. Hoaxes are intended to expose naïveté and credulousness and to mock pseudo wisdom. Unlike most hoaxes, Babbage’s other “impositions” are carried out to advance the perpetrator’s scientific career. “Forging,” which he thought rare, is the counterfeiting of results, today called fabrication. “Trimming” consists of eliminating outliers to make results look more accurate, while keeping the average the same. “Cooking” is the selection of data. Trimming and cooking fall under the modern rubric of “falsification.” Scholarly conventions and standards of scientific probity were probably different in the distant past, yet the feuds, priority disputes and porous notions of scientific truthfulness from previous centuries seem contemporary.
...
"Scientists guilty of misconduct are found in every field, at every kind of research institution and with a variety of social and educational backgrounds. Yet a survey of the excellent coverage of fraud in Science and recent books on the subject—ranging from Horace Freeland Judson’s The Great Betrayal: Fraud in Science (2004) to David Goodstein’s On Fact and Fraud: Cautionary Tales From the Front Lines of Science (2010)—reveals a pattern of the most common, or modal, scientific miscreant. He is a bright and ambitious young man working in an elite institution in a rapidly moving and highly competitive branch of modern biology or medicine, where results have important theoretical, clinical or financial implications. He has been mentored and supported by a senior and respected establishment figure who is often the co-author of many of his papers but may have not been closely involved in the research.
...
"The serious involvement of the government in policing scientific misconduct began only in 1981, when hearings were convened by Al Gore, then a Congressman and chair of the investigations and oversight subcommittee of the House Science and Technology Committee, after an outbreak of egregious scandals. One was the case of John Long, a promising associate professor at Massachusetts General Hospital who was found to have faked cell lines in his research on Hodgkin’s disease. Another case involved Vijay Soman, an assistant professor at Yale Medical School. Soman plagiarized the research findings of Helena Wachslicht-Rodbard, who worked at the NIH. A paper Wachslicht-Rodbard had written about anorexia nervosa and insulin receptors had been sent for publication review to Soman’s mentor, Philip Felig, the vice chair of medicine at Yale. Felig gave it to Soman, who ghostwrote a rejection for Felig. Soman then stole the idea of Wachslicht-Rodbard’s paper and some of its words, fabricated his own supporting “data” and published his results with Felig as co-author.
...
"the section on Plagiarism in the Publication Manual of the American Psychological Association says, ‘The key element of this principle is that an author does not present the work of another author as if it were his own. This can extend to ideas as well as written words.

Monday, August 15, 2011

The winner's curse: auction activist receives prison term

In 2008 I blogged about an environmental activist who disrupted an oil and gas auction by submitting some winning bids. Now he's been sentenced to prison:

Environmental Activist Tim DeChristopher Sentenced to Prison, Tells the Court, "This Is What Hope Looks Like"

The subheadline is "In these times of a morally bankrupt government that has sold out its principles, this is what patriotism looks like." Read his words to the court.

They are worth reading.

HT: Yün-ke Chin-Lee 

Sunday, November 21, 2010

Strategy-proofness and strategy sets: residency fraud in school choice

When we speak of strategy-proofness in the context of school choice, we are most often speaking about whether it is safe for parents to reveal their true preferences when asked to submit a rank ordering of possible school assignments. Of course, parents have other private information as well, and they may have incentives to misrepresent that also.

I'm reminded of this by the fact that the San Francisco Unified School District has recently sent a letter to the address of record to each student regarding an Amnesty Period for Residency Fraud.
(It includes the line "This letter is directed to families that have committed residency fraud. Parents/Guardians who have never submitted false residency information to the District may disregard this letter.")

Saturday, November 20, 2010

Research misconduct in the marketplace for science

Lately there have been a number of news stories about research misconduct of the "conventional" sort, involving scientists falsifying the scientific record by fabricating or misrepresenting data. When federal granting agencies find that someone has committed this kind of fraud they have procedures for denying them future grants, etc.

Here's a story about a kind of egregious scientific misconduct that is not covered by such procedures, although the criminal law addresses a small part of it. The story is about a scientist who sabotaged another scientist's experiments: Research integrity: Sabotage!

"Bhrigu, over the course of several months at Michigan, had meticulously and systematically sabotaged the work of Heather Ames, a graduate student in his lab, by tampering with her experiments and poisoning her cell-culture media. Captured on hidden camera, Bhrigu confessed to university police in April and pleaded guilty to malicious destruction of personal property, a misdemeanour that apparently usually involves cars: in the spaces for make and model on the police report, the arresting officer wrote "lab research" and "cells".
...
"Bhrigu's actions are surprising, but probably not unique. There are few firm numbers showing the prevalence of research sabotage, but conversations with graduate students, postdocs and research-misconduct experts suggest that such misdeeds occur elsewhere, and that most go unreported or unpoliced. In this case, the episode set back research, wasted potentially tens of thousands of dollars and terrorized a young student. More broadly, acts such as Bhrigu's — along with more subtle actions to hold back or derail colleagues' work — have a toxic effect on science and scientists. They are an affront to the implicit trust between scientists that is necessary for research endeavours to exist and thrive.


"Despite all this, there is little to prevent perpetrators re-entering science. In the United States, federal bodies that provide research funding have limited ability and inclination to take action in sabotage cases because they aren't interpreted as fitting the federal definition of research misconduct, which is limited to plagiarism, fabrication and falsification of research data. In Bhrigu's case, administrators at the University of Michigan worked with police to investigate, thanks in part to the persistence of Ames and her supervisor, Theo Ross.
...
"At Washtenaw County Courthouse in July, having reviewed the case files, Pollard Hines delivered Bhrigu's sentence. She ordered him to pay around US$8,800 for reagents and experimental materials, plus $600 in court fees and fines — and to serve six months' probation, perform 40 hours of community service and undergo a psychiatric evaluation.


"But the threat of a worse sentence hung over Bhrigu's head. At the request of the prosecutor, Ross had prepared a more detailed list of damages, including Bhrigu's entire salary, half of Ames's, six months' salary for a technician to help Ames get back up to speed, and a quarter of the lab's reagents. The court arrived at a possible figure of $72,000, with the final amount to be decided upon at a restitution hearing in September.

"Before that hearing could take place, however, Bhrigu and his wife left the country for India...
"Now that Bhrigu is in India, there is little to prevent him from getting back into science. And even if he were in the United States, there wouldn't be much to stop him. The National Institutes of Health in Bethesda, Maryland, through its Office of Research Integrity, will sometimes bar an individual from receiving federal research funds for a time if they are found guilty of misconduct. But Bhigru probably won't face that prospect because his actions don't fit the federal definition of misconduct, a situation Ross finds strange. "All scientists will tell you that it's scientific misconduct because it's tampering with data," she says."

HT: Muriel Niederle

Monday, November 8, 2010

Incentives and erroneous research

The Atlantic runs a profile of Dr. John Ioannidis, who studies bad science. He focuses on medicine, where the incentives are perhaps highest (and in which the costs of bad science might be greatest). But the issues he discusses concern all scientists: Lies, Damned Lies, and Medical Science

"He chose to publish one paper, fittingly, in the online journal PLoS Medicine, which is committed to running any methodologically sound article without regard to how “interesting” the results may be. In the paper, Ioannidis laid out a detailed mathematical proof that, assuming modest levels of researcher bias, typically imperfect research techniques, and the well-known tendency to focus on exciting rather than highly plausible theories, researchers will come up with wrong findings most of the time. Simply put, if you’re attracted to ideas that have a good chance of being wrong, and if you’re motivated to prove them right, and if you have a little wiggle room in how you assemble the evidence, you’ll probably succeed in proving wrong theories right. His model predicted, in different fields of medical research, rates of wrongness roughly corresponding to the observed rates at which findings were later convincingly refuted: 80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials. The article spelled out his belief that researchers were frequently manipulating data analyses, chasing career-advancing findings rather than good science, and even using the peer-review process—in which journals ask researchers to help decide which studies to publish—to suppress opposing views. “You can question some of the details of John’s calculations, but it’s hard to argue that the essential ideas aren’t absolutely correct,” says Doug Altman, an Oxford University researcher who directs the Centre for Statistics in Medicine.

"Still, Ioannidis anticipated that the community might shrug off his findings: sure, a lot of dubious research makes it into journals, but we researchers and physicians know to ignore it and focus on the good stuff, so what’s the big deal? The other paper headed off that claim. He zoomed in on 49 of the most highly regarded research findings in medicine over the previous 13 years, as judged by the science community’s two standard measures: the papers had appeared in the journals most widely cited in research articles, and the 49 articles themselves were the most widely cited articles in these journals. These were articles that helped lead to the widespread popularity of treatments such as the use of hormone-replacement therapy for menopausal women, vitamin E to reduce the risk of heart disease, coronary stents to ward off heart attacks, and daily low-dose aspirin to control blood pressure and prevent heart attacks and strokes. Ioannidis was putting his contentions to the test not against run-of-the-mill research, or even merely well-accepted research, but against the absolute tip of the research pyramid. Of the 49 articles, 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated. If between a third and a half of the most acclaimed research in medicine was proving untrustworthy, the scope and impact of the problem were undeniable. That article was published in the Journal of the American Medical Association. "

The article ends on a philosophical note:

"We could solve much of the wrongness problem, Ioannidis says, if the world simply stopped expecting scientists to be right. That’s because being wrong in science is fine, and even necessary—as long as scientists recognize that they blew it, report their mistake openly instead of disguising it as a success, and then move on to the next thing, until they come up with the very occasional genuine breakthrough. But as long as careers remain contingent on producing a stream of research that’s dressed up to seem more right than it is, scientists will keep delivering exactly that.

“Science is a noble endeavor, but it’s also a low-yield endeavor,” he says. “I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact.”

Friday, September 18, 2009

Cent mail: signalling that your email isn't spam

Here's a new twist (from Yahoo! Research) on paying to send email as a barrier to spam: a 1 cent donation to charity for each email buys you an encrypted stamp that assures the recipient that you paid: Pay-per-email plan to beat spam and help charity.

"Yahoo! Research's CentMail resurrects an old idea: that levying a charge on every email sent would instantly make spamming uneconomic. But because the cent paid for an accredited "stamp" to appear on each email goes to charity, CentMail's inventors think it will be more successful than previous approaches to make email cost. They think the cost to users is offset by the good feeling of giving to charity."

"Some previous schemes, such as Goodmail, simply pocketed the charge for the virtual stamps. Another deterred spammers by forcing computers to do extra work per email; and Microsoft's version requires senders to decipher distorted text."

Here's an earlier post.

Here's another story: Will Users Donate a Penny Per Email to Fight Spam, Yahoo Wonders, which notes
"It’s not clear how much the proposal would help, however, since so much of the spam is now sent using botnets, which are networks of zombie PCs whose owners have no idea their computers are part of a massive spamming organization."

Sunday, July 26, 2009

House flipping fraud in Florida

I received the following email from Eric Budish, the Chicago market designer:

"I came across a neat investigative journalism feature on a form of mortgage fraud called “house flipping” .

The newspaper reviewed 19mm Florida real-estate transactions, and found that 50,000 involved appreciation of 30%+ in less than 90 days. They investigate one fraud circle in depth, and have features on the local police, lenders, etc.

What makes the fraud tick is that the buyer can finance at the new price. So if A legitimately buys a house for 100, then immediately sells it to his buddy B for say 150, B can get a mortgage against the 150 (especially if his buddy C is a real-estate appraiser). Even if B makes a small down payment on the 150, together A and B have extracted 50 minus downpayment minus fees in cash from the transaction. B never intends to repay the 150, and B’s mortgage lender is severely under collateralized.

The reason I think this is all so interesting is that the fraud is only possible because houses are idiosyncratic, but not too idiosyncratic. If houses were perfect substitutes, then A, B and C couldn’t trick the mortgage lender about house values (50,000 flips is a lot, and likely an underestimate, but still less than 1% of transactions). If houses were substantially more idiosyncratic, then banks would never have gotten in the habit of financing 90%+ of the purchase price in the first place: in the event of foreclosure they’d have to worry about whether the right types of buyers would be in the market. Put differently, the housing market is not too thick, but not too thin."
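Budish's arithmetic is easy to make concrete. In the sketch below, the 100 and 150 are the numbers from his example; the loan-to-value ratio and the fee are assumptions added for illustration.

```python
# Working through the flip arithmetic in Eric Budish's example.  The 100/150 figures are his;
# the loan-to-value ratio and the fee are assumptions added for illustration.
true_value   = 100.0   # what A actually paid for the house
flip_price   = 150.0   # the price B "pays" A
ltv          = 0.90    # assumed: the lender finances 90% of the stated purchase price
fees         = 5.0     # assumed: closing costs, appraisal, etc.

mortgage     = ltv * flip_price          # 135 lent against the inflated price
down_payment = flip_price - mortgage     # 15 that B must bring to the table

# Cash extracted by the pair: A nets (flip_price - true_value), B is out the down payment.
cash_extracted = (flip_price - true_value) - down_payment - fees

# When B walks away, the lender forecloses and recovers only roughly the true value.
lender_loss = mortgage - true_value

print(f"cash extracted by A and B: {cash_extracted:.0f}")   # 50 - 15 - 5 = 30
print(f"lender's loss at foreclosure: {lender_loss:.0f}")   # 135 - 100 = 35
```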