
Tuesday, February 17, 2015

Aaron Roth on differential privacy

Penn News covers a recent presentation:
An Introduction to ‘Differential Privacy,’ from Penn Professor Aaron Roth

The ability to amass, store, manipulate and analyze information from millions of people at once has opened a vast frontier of new research methods. But, whether these methods are used in the service of new business models or new scientific findings, they also raise questions for the individuals whose information comprises these “big data” sets. Can anyone really guarantee that these individuals’ information will remain private?
Aaron Roth, assistant professor in the Department of Computer and Information Science in the University of Pennsylvania’s School of Engineering and Applied Science, is trying to answer that question.
Along with colleagues from Harvard University, Microsoft Research, Pennsylvania State University and IBM Research, he presented some of his ongoing research on “differentially private” algorithms at the American Association for the Advancement of Science Annual Meeting in San Jose, Calif.
The differential privacy approach ensures that people querying big databases see only representative trends and can’t game their searches to reveal information about specific individuals in the set.
This technique has implications for companies like Google, Facebook and Amazon, businesses which depend on gleaning insights from trends in users’ behavior while maintaining their trust that their personal data is not being exploited or compromised. But it could extend beyond the tech giants and into the world of scientific publishing by addressing the so-called “false discovery” problem, where some scientific findings seem valid but cannot be reproduced. Such false discoveries stem from “overfitting” hypotheses to outlying individuals in a dataset, rather than generalizable trends that apply to the population at large.
“There’s this idea where the more privacy you have the less useful the data is,” Roth says. “There’s some truth to that, but it’s not quite so simple. Privacy can also increase the usefulness of data by preventing this kind of overfitting.”
The “different” in its name is a reference to the guarantee a differentially private algorithm makes. Its analyses should remain functionally identical when applied to two different datasets: one with and one without the data from any single individual.
“The math behind differentially private algorithms gives you a provable guarantee that, even if you know everything else about the dataset, you can’t learn anything new about my data in particular,” Roth says. “The algorithm essentially can’t tell whether my data is in the set in the first place.”      
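The guarantee is typically achieved by adding calibrated random noise to query answers. Below is a minimal sketch of the classic Laplace mechanism for a counting query; the function and parameter names are illustrative and not taken from Roth's talk.

import numpy as np

def private_count(data, predicate, epsilon=0.5):
    """Differentially private count of the records satisfying `predicate`.

    Adding or removing any single record changes the true count by at most 1
    (its "sensitivity"), so Laplace noise with scale 1/epsilon is enough for
    epsilon-differential privacy.
    """
    true_count = sum(1 for record in data if predicate(record))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Toy example: a query over a made-up list of ages.
ages = [34, 51, 29, 62, 45]
print(private_count(ages, lambda age: age > 40))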
The very nature of large data sets makes privacy a dicey proposition. Stripping those records of common identifiers, such as names or email addresses, still leaves a trove of information that could, with some effort, be used to pinpoint data from a specific individual. Such “ad hoc” methods of protecting privacy in a data set are ultimately doomed in that the set’s owner can never predict what outside information would-be attackers might use to reverse-engineer the hidden data. 
“For example, at one point, it was shown that an attack on Amazon’s recommendation algorithm was possible,” Roth says. “If I knew five or six things you bought on Amazon, I could buy those same things, and all of a sudden, we’re now the two most similar customers in Amazon's recommendation algorithm. I could then start seeing what else you were buying, as whatever you bought would then be recommended to me.”
A differentially private recommendation algorithm would defend against such an attack because it would discount idiosyncratic trends as being just that: only representing an individual’s data rather than something that is statistically valid for the entire set.
Beyond protecting customers’ private information, such an algorithm would also be better at its job. 
“You don’t actually want something that is good at predicting what people have bought historically; that may just be an example of you overfitting the data,” Roth says. “You want something that predicts what they are going to buy tomorrow — things that are not in the set yet, and the same applies to scientific findings.”
Generating and collecting data is often the most expensive and time-consuming part of a scientific study, so datasets are often shared among scientists. This altruism has a hidden downside, however, as it disrupts the scientific publishing system’s standard method of ascribing significance to a particular finding.      
“The way you normally determine if a finding is significant is by computing its ‘p-value,’” Roth says. “This tells you the probability that the correlation you observe would appear just as significant if it occurred by random chance. The standard level for significance is .05, but that also means that if you test 100 hypotheses, even if they’re all wrong, you’d expect five of them would appear significant.
“There are ways to correct for the fact that you test many hypotheses, but the existing methods only work if you came up with all of your hypotheses before anyone ever looked at the data. If scientists re-use the same datasets, these guarantees disappear.”
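Roth’s arithmetic is easy to check by simulation: test 100 hypotheses on pure noise at the 0.05 level and roughly five of them will look “significant” by chance. A minimal, purely illustrative sketch (it uses scipy for the t-test; the sample size and seed are arbitrary):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 100 hypotheses, all of them false: the data is pure noise with no real effect.
n_hypotheses, n_samples, alpha = 100, 30, 0.05
false_positives = 0
for _ in range(n_hypotheses):
    sample = rng.normal(loc=0.0, scale=1.0, size=n_samples)
    _, p_value = stats.ttest_1samp(sample, popmean=0.0)
    if p_value < alpha:
        false_positives += 1

# Typically around 5 of the 100 null hypotheses appear "significant".
print(false_positives)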
If, rather than sharing raw data, scientists allow a dataset to be queried only through a differentially private algorithm, then they recover the ability to protect against “false discoveries” that come from overfitting the data. Roth and his colleagues Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi and Omer Reingold have theoretical results proving the effectiveness of this approach, which will be presented this summer at the ACM Symposium on the Theory of Computing.
“You always want the conclusions you draw to be based on true statistical patterns in the data, rather than the idiosyncrasies of a single individual in the set,” Roth says. “This is the same thing you want in private data analysis, and this is why differential privacy can also prevent false discoveries.”

Friday, February 13, 2015

Differential privacy and the market for data, at the AAAS meeting tomorrow

If you are at the AAAS meeting in San Jose tomorrow, and are interested in how the new data environment interacts with privacy concerns, you might want to check out this session:

Saturday, 14 February 2015: 10:00 AM-11:30 AM
Room LL21C (San Jose Convention Center)
To realize the full potential of big data for societal benefit, we must also find solutions to the privacy problems raised by the collection, analysis, and sharing of vast amounts of data about people. As discussed in the 2014 AAAS Annual Meeting session "Re-Identification Risk of De-Identified Data Sets in the Era of Big Data," the traditional approach of anonymizing data by removing identifiers does not provide adequate privacy protection, since it is often possible to re-identify individuals using the seemingly innocuous data that remains in the dataset together with auxiliary information known to an attacker and/or present in publicly available datasets. Differential privacy offers the possibility of avoiding such vulnerabilities. It provides a mathematically rigorous formalization of the requirement that a data-sharing or analysis system should not leak individual-specific information, regardless of what auxiliary information is available to an attacker. A rich body of work over the past decade has shown that a wide variety of common data analysis tasks are compatible with the strong protections of differential privacy, and a number of promising efforts are underway to bring these methods to practice. In addition, differential privacy has turned out to have powerful implications for questions outside of privacy, in areas such as economics and statistics. This symposium will discuss these facets of differential privacy.
Organizer:
Salil Vadhan, Harvard University 
Co-Organizer:
Cynthia Dwork, Microsoft Research, Silicon Valley 
Speakers:
Aaron Roth, University of Pennsylvania
An Introduction to Differential Privacy
Sofya Raskhodnikova, Pennsylvania State University
Differentially Private Analysis of Graphs and Social Networks
Moritz Hardt, IBM Almaden Research Center
Guilt-Free Interactive Data Analysis

Saturday, February 7, 2015

Differential Privacy: an appreciation of Cynthia Dwork

On Thursday I heard Cynthia Dwork talk about differential privacy in San Diego, and here is an appreciation of her at the CS blog Gödel's Lost Letter and P=NP, by Dick Lipton and Ken Regan:

Cynthia Dwork and a Brilliant Idea

Here's their introductory paragraph:
"This concept is brilliant. It is, in our opinions, one of the greatest definitions of this century. Okay the century is just fifteen years old, but it is a terrific notion. She deserves credit for seeing that this simple one would have such far reaching consequences and for following through on it. Her paper—still in the 12-page ICALP 2006 proceedings format—begins with three prose pages of motivation that are a breath of fresh air. The definition originated from work with Frank McSherry in a prior paper also with Kobbi Nissim and Adam Smith, and has grown to fill a book joint with Aaron Roth."

Monday, October 27, 2014

Microeconomics and computer science at Cornell: auctions and privacy

Gates Hall isn't a bad name for a building in which to schedule a seminar in which economics and computers are the subject...

Joint Microeconomics and Computer Science Workshop, Aaron Roth

Mon, 10/27/2014 - 4:00pm

Aaron Roth

University of Pennsylvania

310 Gates Hall


Event Categories: Microeconomic Theory
Private and (Asymptotically) Truthful Combinatorial Auctions


Abstract:
Consider the following problem: a large public electricity provider (say, in California) faces a situation where demand for electricity might, without intervention, rise above the utility's ability to generate power. Rather than resorting to rolling brown-outs, however, a forward-thinking ballot initiative has given the utility the ability to shut off the air-conditioners of individual buildings remotely. Using this ability, the utility hopes, they might be able to coordinate shut-offs so that nobody is ever uncomfortable (say, guaranteeing that every apartment's air conditioner runs during some 10 minute interval of every hour in which the apartment is occupied), but so that peak electricity usage never rises above peak power production.
While this combinatorial optimization approach to the problem might be preferable to rolling brown outs, it introduces a privacy concern: the utility will now be making decisions as a function of when customers report they are at home, which can be highly sensitive information. Is there a way to solve this problem so that no coalition of customers j != i can learn about the schedule of customer i? Moreover, can we pair such a solution with a schedule of electricity rates so that no player has more than a vanishing incentive to misreport their demands?

We show that the answer is "yes" to this problem, and to a broad class of welfare maximization problems that can be posed as convex programs. We give a method to compute near optimal solutions to such problems under the formal constraint of "differential privacy", while giving Walrasian-equilibrium-like item/resource pricings which result in truth-telling as an asymptotically dominant strategy, as the size of the economy grows large (in a mild way).

Thursday, October 23, 2014

Approximately school optimal student-truthful school matching via differential privacy

The paper below by Kannan, Morgenstern, Roth and Wu is being presented today at 12:15 at the Stanford computer science theory seminar (in Gates 4b).


Wednesday, March 6, 2013

Differential Privacy and Economics and the Social Sciences


Differential Privacy and Economics and the Social Sciences

SIMONS FOUNDATION

Thursday, March 7, 2013 from 9:00 AM to 9:30 PM (EST)

New York, NY


A day devoted to Economics and Social Sciences and the Science of Privacy will take place on Thursday, March 7th in New York City. This event is funded by the Simons Foundation and the Alfred P. Sloan Foundation.
Tutorial on Differential Privacy 9:30 - 11:30 AM
LOCATION: Simons Foundation
Speaker: Aaron Roth (Computer Science, University of Pennsylvania).

Privacy and Issues in Mechanism Design 1:15 - 3:45 PM
LOCATION: Simons Foundation
Presentation by Alvin Roth (Economics, Stanford) on privacy issues in market design, followed by a discussion, co-organized by Mallesh Pai (Economics, University of Pennsylvania) and Eric Budish (Booth School of Business, University of Chicago), of the issues raised by Roth.
Talks by Scott Kominers (Becker Friedman Institute, University of Chicago) and Tim Mulcahy (NORC, University of Chicago).

Topic-Specific Talks 4:45 - 5:50 PM
LOCATION: Simons Foundation
Talks by Julia Lane (American Institutes for Research), Ben Handel (Economics, Berkeley), and Hal Salzman (Public Policy, Rutgers) on privacy aspects of their research.

Evening Session 8:00 - 9:30 PM
LOCATION: Simons Foundation
An evening plenary session featuring a presentation by NYU Professor Steven Koonin, Director of the nascent Center for Urban Science and Progress, "a unique public-private research center that uses New York City as its laboratory and classroom to help cities around the world become more productive, liveable, equitable and resilient." Remarks by Micah Altman (MIT and Brookings Institution) and Felix Wu (Benjamin Cardozo School of Law).

Registration is free and open to the public, on a first-come first-served basis. By registering you will confirm your attendance.

Wednesday, January 9, 2013

Death and privacy (and transplantation and market design)

The latest issue of the AJT Report, a news summary in the American Journal of Transplantation, concerns The Unintended Consequences of Privacy, reporting on a recent decision by the Social Security Administration to decline to share some data about deaths, due to privacy concerns.

Apparently this decision will mean that the Health Resources and Service Administration (HRSA) will no longer be able to disclose to transplant centers the deaths that occur of patients on the waiting list for deceased donor organs, or after transplantation.

This could turn into a big problem, unless it is resolved soon, because information about deaths is critical for managing the transplant system at all levels.

"According to Patricia W. Potrzebowski, PhD, executive director of the National Association of Public Health Statistics and Information Systems (NAPHSIS), “SSA never had the authority
to release state death records through the public DMF. This is because state records are governed by state statutes and regulations. State statutes and regulations vary as to who may access death record information. In some states, death record information is publicly available; in others, even the fact of death is held in strict confidence.”
***********

It is becoming increasingly clear that privacy is a big issue in market design in general. The AJT report coincidentally juxtaposes the two issues by including some Nobel news at the end of the report.

Monday, November 19, 2012

Game theory and differential privacy

Here's a lecture on game theory and differential privacy, by Aaron Roth, an up-and-coming computer scientist whose work I've followed for a long time.


DIMACS Tutorials - Oct 24, 2012: Aaron Roth - Game Theory and Differential Privacy

Wednesday, October 10, 2012

Privacy of auction bids

Steve Leider writes:


I came across an interesting market design anecdote in a larger article about cryptography (http://arstechnica.com/security/2012/09/quantum-cryptography-yesterday-today-and-tomorrow/5/), and I found out more details here (http://ercim-news.ercim.eu/trading-sugar-beet-quotas-secure-multiparty-computation-in-practice)

The basic story is that all the production of sugar beets in Denmark gets sold to a monopsonist firm, Danisco, and at the start of the year each farmer buys rights to sell a certain quantity of beets to Danisco at harvest, based on their production estimates.  Often when harvest comes farmers end up wanting to buy or sell rights; however, in the past this has been difficult to do in a centralized fashion, because a double auction for rights would reveal too much information to Danisco and enhance its bargaining power versus the farmers' association.  Recently they instituted a system where farmers essentially submit their bids in encrypted form to an algorithm that can compute the market-clearing price and exchanges without needing to decrypt individual bids.

Steve
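The Danish system described in the ERCIM article relied on a full secure multiparty computation protocol; the sketch below is only a toy illustration of the underlying idea of additive secret sharing, in which each computation server sees nothing but random-looking shares, yet the aggregate net demand at a candidate price can still be reconstructed. All names and numbers here are made up.

import random

PRIME = 2_000_000_011  # all share arithmetic is done modulo a large prime

def share(secret, n_servers=3):
    """Split an integer into n additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_servers - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

# Each farmer's net demand for quota at one candidate price (positive = buy).
farmer_bids = [120, -45, 30, -80, -25]

# Each farmer sends one share of his bid to each of three independent servers...
server_totals = [0, 0, 0]
for bid in farmer_bids:
    for i, s in enumerate(share(bid % PRIME)):
        server_totals[i] = (server_totals[i] + s) % PRIME

# ...and only the aggregate net demand is ever reconstructed.
aggregate = sum(server_totals) % PRIME
if aggregate > PRIME // 2:   # map back from mod-PRIME to a signed value
    aggregate -= PRIME
print("Aggregate net demand at this price:", aggregate)  # 0 means the market clears here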

Tuesday, July 24, 2012

Incentives and privacy

A new paper by three computer scientists and an economist reports on some connections between privacy and incentive compatibility.

MECHANISM DESIGN IN LARGE GAMES: INCENTIVES AND PRIVACY
by
MICHAEL KEARNS, MALLESH M. PAI, AARON ROTH and JONATHAN ULLMAN
July 18, 2012


ABSTRACT
We study the design of mechanisms satisfying two desiderata: incentive compatibility and privacy. The first requires that each agent should be incentivized to report her private information truthfully. The second, privacy, requires that the mechanism not reveal ‘much’ about any agent’s type to other agents. We propose a notion of privacy we call Joint Differential Privacy. It is a variant of Differential Privacy, a robust notion of privacy used in the Theoretical Computer Science literature. We show by construction that such mechanisms, i.e. ones which are both incentive compatible and jointly differentially private, exist when the game is ‘large’, i.e. there are a large number of players, and any player’s action affects any other’s payoff by at most a small amount. Our mechanism adds carefully selected noise to no-regret algorithms similar to those studied in Foster-Vohra [FV97] and Hart-Mas-Colell [HMC00]. It therefore implements an approximate correlated equilibrium of the full information game induced by players’ reports.
*********

As I understand it, adding appropriate randomness to regret learning algorithms doesn’t harm their long term equilibration properties, and gives them good privacy properties, which together give them good incentive properties.
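As a purely illustrative toy (not the paper's actual mechanism), one can run an exponential-weights (Hedge) learner that only ever observes Laplace-noised losses and check that its average regret stays small; all parameters below are made up.

import numpy as np

rng = np.random.default_rng(1)

def noisy_hedge(loss_matrix, eta=0.1, epsilon=1.0):
    """Exponential-weights learner fed Laplace-noised loss observations.

    A toy stand-in for 'adding appropriate randomness to regret learning
    algorithms'; returns the learner's total loss and the loss of the best
    fixed action in hindsight, so their difference is the regret.
    """
    n_rounds, n_actions = loss_matrix.shape
    weights = np.ones(n_actions)
    total_loss = 0.0
    for t in range(n_rounds):
        probs = weights / weights.sum()
        action = rng.choice(n_actions, p=probs)
        total_loss += loss_matrix[t, action]
        noisy_losses = loss_matrix[t] + rng.laplace(scale=1.0 / epsilon, size=n_actions)
        weights *= np.exp(-eta * noisy_losses)
    best_fixed = loss_matrix.sum(axis=0).min()
    return total_loss, best_fixed

losses = rng.uniform(size=(2000, 3))   # an arbitrary loss sequence over 3 actions
total, best = noisy_hedge(losses)
print("average regret per round:", (total - best) / len(losses))  # typically small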

Thursday, September 1, 2011

Theory of privacy

Several courses are being offered that deal with new theories of data privacy, concerning how to usefully answer queries from a database while preserving the privacy of individuals in the database, even if the queries can be combined with auxiliary information from other data sources.

These concerns arise in response to the practical observation that even "anonymized" databases can often be "de-anonymized" by combining them with other information.

All the course sites below link to papers in the literature, and, at least at this early stage of development, there seems to be a great deal of consensus on which papers to cover.

The Algorithmic Foundations of Data Privacy taught by Aaron Roth this Fall at Penn

Algorithmic Challenges in Data Privacy taught at Penn State by Sofya Raskhodnikova and Adam D. Smith

Foundations of Privacy taught at Weizmann by Moni Naor.

Monday, June 20, 2011

Economics of Privacy, continued

Privacy is complicated, and this is made clear when we think about buying and selling personal data in ways that preserve privacy. Suppose, for example, that I ask you whether I can have access to your medical records for $1,000. If you say yes, I'll learn your medical history, but if you say no, I can already draw some conclusions about the likelihood that you have some medical condition that you would like to keep private.

Here is an announcement for a Postdoc in Economics of Privacy at UPenn, with application details at the link.

"Applications are invited for a postdoc position in the theory of privacy and economics at the University of Pennsylvania. An outline of the hosting project is below.

The ideal candidate will have a Ph.D. in Computer Science, Economics, or Statistics and a strong record of publication. 
...
"In the last decade private data has become a commodity: it is gathered, bought and sold, and contributes to the primary business of many Internet and information technology companies. At the same time, various formalizations of the notion of ‘privacy’ have been developed and studied by computer scientists. Nevertheless, to date, we lack a theory for the economics of digital privacy, and we propose to close this important gap.

"Concretely, we propose to develop the theory to address the following questions:

"How should a market for private data be structured? How can we design an auction that accommodates issues specific to private data analysis: that the buyer of private data often wishes to buy from a representative sample from the population, and that individuals value for their privacy can itself be a very sensitive piece of information?

"How should we structure other markets to properly account for participants concerns about privacy? How should we properly model privacy in auction settings, and design markets to address issues relating to utility for privacy?

"Studying economic interactions necessitates studying learning – but what is the cost of privacy on agent learning? How does the incomplete information that is the necessary result of privacy preserving mechanisms affect how individuals engaged in a dynamic interaction can learn and coordinate, and how do perturbed measurements affect learning dynamics in games? How can market research be conducted both usefully and privately?

"Our investigation of these questions will blend models and methods from several relevant fields, including computer science, economics, algorithmic game theory and machine learning.

"The proposed research directly addresses one of the most important tensions that the Internet era has thrust upon society: the tension between the tremendous societal and commercial value of private and potentially sensitive data about individual citizens, and the interests and rights of those individuals to control their data. Despite the attention and controversy this tension has evoked, we lack a comprehensive and coherent science for understanding it. Furthermore, science (rather than technology alone) is required, since the technological and social factors underlying data privacy are undergoing perpetual change. Within the field of computer science, the recently introduced subfield of privacy preserving computation has pointed the way to potential advances. The proposed research aims to both broaden and deepen these directions."

Friday, June 3, 2011

Privacy and Economics

It appears that "privacy and economics" may be an emerging topic in computer science, to judge from a postdoc mentioned on a CS blog I follow:


Differential Privacy Postdoc at UPenn


"We are building a differential privacy group at UPenn! Below is the announcement for a postdoc position in the theory and practice of differential privacy. If you are a theorist who wants to actually put your contributions into practice as well, please apply.

"There will be another announcement soon for another pure-theory postdoc position in the exciting new area of "privacy and economics". Stay tuned, and contact me if you are interested."

Tuesday, November 9, 2010

Markets for buying and selling privacy

Noam Nisan at AGT/E reports on New Papers on Auctions by computer scientists, including this one that I find interesting for several reasons: it is about market design for markets in which privacy is sold.


Selling Privacy at Auction by Arpita Ghosh and Aaron Roth
We initiate the study of markets for private data, through the lens of differential privacy. Although the purchase and sale of private data has already begun on a large scale, a theory of privacy as a commodity is missing. In this paper, we propose to build such a theory. Specifically, we consider a setting in which a data analyst wishes to buy information from a population from which he can estimate some statistic. The analyst wishes to obtain an accurate estimate cheaply. On the other hand, the owners of the private data experience some cost for their loss of privacy, and must be compensated for this loss. Agents are selfish, and wish to maximize their profit, so our goal is to design truthful mechanisms. Our main result is that such auctions can naturally be viewed and optimally solved as variants of multi-unit procurement auctions. Based on this result, we derive auctions for two natural settings which are optimal up to small constant factors:
1. In the setting in which the data analyst has a fixed accuracy goal, we show that an application of the classic Vickrey auction achieves the analyst’s accuracy goal while minimizing his total payment.
2. In the setting in which the data analyst has a fixed budget, we give a mechanism which maximizes the accuracy of the resulting estimate while guaranteeing that the resulting sum of payments does not exceed the analyst’s budget.
In both cases, our comparison class is the set of envy-free mechanisms, which correspond to the natural class of fixed-price mechanisms in our setting.
In both of these results, we ignore the privacy cost due to possible correlations between an individual’s private data and his valuation for privacy itself. We then show that generically, no individually rational mechanism can compensate individuals for the privacy loss incurred due to their reported valuations for privacy.
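The fixed-accuracy result builds on a standard multi-unit procurement (reverse Vickrey) auction. The sketch below shows only that familiar building block, not Ghosh and Roth's full mechanism: buy reports from the k lowest-cost individuals and pay each of them the (k+1)-st lowest reported cost, which makes truthful cost reporting a dominant strategy.

def procure_k_reports(cost_bids, k):
    """Multi-unit procurement (reverse Vickrey) auction.

    Buys from the k bidders reporting the lowest privacy costs and pays each
    winner the (k+1)-st lowest reported cost, so no bidder can gain by
    misreporting her true cost.
    """
    order = sorted(range(len(cost_bids)), key=lambda i: cost_bids[i])
    winners = order[:k]
    price_per_winner = cost_bids[order[k]]  # the (k+1)-st lowest bid
    return winners, price_per_winner

# Example: the analyst needs k = 3 data points to hit the target accuracy.
bids = [5.0, 1.0, 8.0, 2.5, 4.0]
winners, price = procure_k_reports(bids, k=3)
print(winners, price)  # bidders 1, 3 and 4 win, and each is paid 5.0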

Wednesday, June 9, 2010

Personal data for sale

Web Start-Ups Offer Bargains for Users’ Data

The budgeting Web site Mint.com, for example, displays discount offers from cable companies or banks to users who reveal their personal financial data, including bank and credit card information. The clothing retailer Bluefly could send offers for sunglasses to consumers who disclose that they just bought a swimsuit. And location-based services like Foursquare and Gowalla ask users to volunteer their location in return for rewards like discounts on Pepsi drinks or Starbucks coffee.

These early efforts are predicated on a shift in the relationship between consumer and company. Influenced by consumers’ willingness to trade data online, the sites are pushing to see how much information people will turn over."...


"New companies including WeShop, Aprizi, Blippy and Dopplr are trying to exploit the data that people seem so willing to give up. Some are even allowing shoppers to set what terms they want — free shipping, half-price discounts, only fair-trade products. They can also list what they are shopping for, like a gray cashmere sweater under $100, for instance, and let the retailers fight it out for the right to make a sale.


“The whole privacy debate has grown up around people using your data without your permission,” said Antony Lee, chief executive of WeShop. “If you want to use your data to your benefit, that’s for you to do,” Mr. Lee said."

Wednesday, December 30, 2009

Airport security and privacy

Recent discussions of airport security in the post-underpants-bomber era make it clear that privacy is a complex issue. For example, if an airport screener is going to see a digital image of what you look like under your clothes, is your privacy preserved better if the screener can't also see your face? If the screener is in a remote viewing room?

Debate Over Full-Body Scans vs. Invasion of Privacy Flares Anew After Incident
"The technology exists to reveal objects hidden under clothes at airport checkpoints, and many experts say it would have detected the explosive packet carried aboard the Detroit-bound flight last week. But it has been fought by privacy advocates who say it is too intrusive, leading to a newly intensified debate over the limits of security."
...
"But others say that the technology is no security panacea, and that its use should be carefully controlled because of the risks to privacy, including the potential for its ghostly naked images to show up on the Internet."
...
"“I’m on an airplane every three or four days; I want that plane to be as safe and secure as possible,” Mr. Chaffetz said. However, he added, “I don’t think anybody needs to see my 8-year-old naked in order to secure that airplane.” "
...
"Images produced by the machines in the days before privacy advocates began using phrases like “digital strip search” could be startlingly detailed. Machines used in airports today, however, protect privacy to a greater extent, said Kristin Lee, a spokeswoman for the T.S.A.
Depending on the specific technology used, faces might be obscured or bodies reduced to the equivalent of a chalk outline. Also, the person reviewing the images must be in a separate room and cannot see who is entering the scanner. The machines have been modified to make it impossible to store the images, Ms. Lee said, and the procedure “is always optional to all passengers.” Anyone who refuses to be scanned “will receive an equivalent screening”: a full pat-down."

Monday, August 10, 2009

Secondary market for prescriptions: a privacy-repugnant transaction

The information on your drug prescriptions, including your name, can be bought and sold, reports Milt Freudenheim in the NY Times: And You Thought a Prescription Was Private

"... prescriptions, and all the information on them — including not only the name and dosage of the drug and the name and address of the doctor, but also the patient’s address and Social Security number — are a commodity bought and sold in a murky marketplace, often without the patients’ knowledge or permission.
That may change if some little-noted protections from the Obama administration are strictly enforced. The federal stimulus law enacted in February prohibits in most cases the sale of personal health information, with a few exceptions for research and public health measures like tracking flu epidemics."
...
"Selling data to drug manufacturers is still allowed, if patients’ names are removed. But the stimulus law tightens one of the biggest loopholes in the old privacy rules. Pharmacy companies like Walgreens have been able to accept payments from drug makers to mail advice and reminders to customers to take their medications, without obtaining permission. Under the new law, the subsidized marketing is still permitted but it can no longer promote drugs other than those the customer already buys. "

Loss of privacy, particularly medical privacy, is a negative externality to some transactions that is increasingly seen as making them repugnant.

Tuesday, April 28, 2009

Private sales of artworks, in a down market

Auctions are good for price discovery, and price discovery is particularly important for "common value" goods, i.e. goods whose value to each buyer may be substantially influenced by what other buyers are willing to pay. So, what should you do if you want to unload a common value asset but want to avoid disseminating potentially bad news about what it might be worth? The NY Times reports More Artworks Sell in Private in Slowdown.

"During good times, an auction is the obvious choice for any collector wanting to sell a work of art. But as the recession takes its toll, many collectors have changed strategies and retreated to the more hidden, and potentially less lucrative, world of private sales.
"For many sellers, the driving factor is fear. Fear that their friends will discover they need money. Fear that if a Picasso or Warhol, Monet or Modigliani does not sell at auction, it will be considered yesterday’s goods.
If they do not have to, fewer collectors are putting their holdings up for auction at Sotheby’s and Christie’s, where prices and profits have plummeted. But executives at both houses say business in their private-sale departments has more than doubled in recent months. "
...
" “The game has definitely shifted,” said Christopher Eykyn, a former head of Impressionist and modern art at Christie’s who is now a dealer in New York. “A lot of clients don’t want to be seen selling, so the private route is suddenly more attractive.” "
...
"There are exceptions, of course. Estates continue to go to auction because executors have a fiduciary responsibility and prices are rarely challenged after public sales.
For the auction houses, private sales are lucrative and inexpensive. Generally Sotheby’s and Christie’s charge 5 to 10 percent of the purchase price of an artwork, depending on its value and the agreement with the seller. (If a work goes to auction the houses charge sellers 25 percent of the first $50,000, 20 percent of the next $50,000 to $1 million and 12 percent of the rest.) Money earned from private transactions comes cheap, without expenses like advertising, insurance and shipping associated with auctions.
The dismal sales in New York in November, when night after night paintings by Monet and Matisse, Bacon and Warhol went unsold, meant big losses for Sotheby’s and Christie’s, which had a financial interest in most of this expensive art in the form of guarantees, undisclosed sums paid to sellers regardless of a sale’s outcome.
After the fall auctions, both houses immediately began changing the way they conduct business. In addition to announcing hundreds of layoffs, with perhaps more to come, they mostly halted the practice of guarantees and stopped giving consignors a cut in the fees they charge buyers. The days of publishing luscious catalogs have ended as well.
For their part, dealers say that their phones started ringing after Sept. 15, the day Lehman Brothers filed for bankruptcy. “It’s been pretty steady ever since,” said Steven P. Henry, director of the Paula Cooper Gallery in Chelsea. He said he had been getting inquiries about selling art from people who had investments with Bernard L. Madoff, or who had seen the value of their stock or real estate assets collapse."
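As a quick illustration of the tiered seller's commission quoted above (under one reading of the schedule, with each rate applied to a successive slice of the hammer price), a $2 million sale would owe $12,500 + $190,000 + $120,000 = $322,500:

def sellers_commission(hammer_price):
    """Seller's fee under the tiered auction-house schedule quoted above:
    25% of the first $50,000, 20% of the next $950,000 (up to $1 million),
    and 12% of everything beyond that."""
    tiers = [(50_000, 0.25), (950_000, 0.20), (float("inf"), 0.12)]
    fee, remaining = 0.0, hammer_price
    for width, rate in tiers:
        portion = min(remaining, width)
        fee += portion * rate
        remaining -= portion
        if remaining <= 0:
            break
    return fee

print(sellers_commission(2_000_000))  # 322500.0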

Tuesday, April 21, 2009

Giving anonymously, through an intermediary

I recently blogged about a service set up to enable people to give (relatively) small gifts anonymously, Giving anonymously, for a fee.

Larger gifts, e.g. to university endowments, are also often given anonymously. But "anonymous" is a relative term, and often someone at the gift-receiving institution knows who the donor is. (Here at Harvard, my HBS office is in a building called Baker Library, whose southern side now sports a new wing called Bloomberg. It was anonymous for a while; all we knew was that the donor wished his identity to remain unknown until he had completed his election for a second term as mayor of New York.)

Now a number of gifts to universities have been made behind a deeper than usual veil of anonymity: Mystery donors give over $45M to 9 universities.

"A mystery is unfolding in the world of college fundraising: During the past few weeks, at least nine universities have received gifts totaling more than $45 million, and the schools had to promise not to try to find out the giver's identity.
One school went so far as to check with the IRS and the Department of Homeland Security just to make sure a $1.5 million gift didn't come from illegal sources.
"In my last 28 years in fundraising ... this is the first time I've dealt with a gift that the institution didn't know who the donor is," said Phillip D. Adams, vice president for university advancement at Norfolk State University, which received $3.5 million.
The gifts ranged from $8 million at Purdue to $1.5 million donated to the University of North Carolina at Asheville. The University of Iowa received $7 million; the University of Southern Mississippi, the University of North Carolina at Greensboro and the University of Maryland University College got $6 million each; the University of Colorado at Colorado Springs was given $5.5 million; and Penn State-Harrisburg received $3 million.
It's not clear whether the gifts came from an individual, an organization or a group of people with similar interests. In every case, the donor or donors dealt with the universities through lawyers or other middlemen. Some of the money came in cashier's checks, while other schools received checks from a law firm or another representative.
All the schools had to agree not to investigate the identity of the giver. Some were required to make such a promise in writing."

Saturday, March 14, 2009

Personalized advertising

As the world wide web is increasingly accessed by mobile devices (equipped with GPS, and used largely by a single individual) the ability to narrowly target advertisements is increasing: Advertisers Get a Trove of Clues in Smartphones.
"Advertisers will pay high rates for the ability to show, for example, ads for a nearby restaurant to someone leaving a Broadway show, especially when coupled with information about the gender, age, finances and interests of the consumer."

"Applications that use GPS can offer even more specificity, including Loopt, Yelp, Urbanspoon, Where and almost any iPhone application that shows the pop-up box saying it “would like to use your current location.” Several firms are experimenting with a program called AisleCaster that can offer specials based on a person’s exact location in a supermarket aisle or mall.
Advertising systems can track not only the location of the phone, but also that person’s travel pattern: uptown New York to Nob Hill in San Francisco, for instance."

"For now, there are not enough people using smartphones to make it worthwhile for advertisers to use highly specific criteria. But as more people switch to smartphones, that will happen more frequently."

The article also discusses the privacy issue, and whether customers will be "creeped out" by ads that reveal how specifically they have been targeted. (I wonder if people will find it equally creepy to be targeted by a computer from a big database that no human looks at as by a human being at a dinner time call center...)