Computers Gone Wild: Impact and Implications of Developments in Artificial Intelligence on Society was an informal discussion that took place at Harvard Law School on December 8th, 2011. Hosted by Jonathan Zittrain, Marin Soljačić, and the Berkman Center for Internet & Society, the workshop brought together eighteen mostly local guests to discuss the ways that AI is changing society. Rather than making futuristic predictions about the Singularity or focusing on the underlying technology itself, the workshop explored the effects of current technology. Sessions included discussions on warfare, finance, education, and labor. Below is a list of attendees and a summary of the discussion.
- Ryan P. Adams – Assistant Professor of Computer Science, School of Engineering and Applied Sciences, Harvard University.
- Susan Athey – Professor of Economics, Department of Economics, Harvard University.
- David Autor – Professor and Associate Department Head, Department of Economics, MIT.
- Gabriella Blum – Rita E. Hauser Professor of Human Rights and Humanitarian Law, Harvard Law School.
- Daniel Dennett – Austin B. Fletcher Professor of Philosophy, Tufts University.
- Peter Galison – Joseph Pellegrino University Professor, Department of the History of Science, Harvard University.
- Andrew Lo – Harris & Harris Group Professor, Director, MIT Laboratory for Financial Engineering, MIT.
- John Markoff – Journalist, The New York Times.
- Andrew McAfee – Principal Research Scientist, Center for Digital Business, MIT Sloan School of Management.
- John Palfrey – Henry N. Ess III Professor of Law and Vice Dean, Library and Information Resources, Harvard Law School, Harvard University.
- David Parkes – Gordon McKay Professor of Computer Science, School of Engineering and Applied Sciences, Harvard University.
- Steven Pinker – Harvard College Professor and Johnstone Family Professor, Department of Psychology, Harvard University.
- Lisa Randall – Frank B. Baird, Jr., Professor of Science, Department of Physics, Harvard University.
- Stuart Shieber – James O. Welch Jr. and Virginia B. Welch Professor of Computer Science, School of Engineering and Applied Sciences, Harvard University.
- Marin Soljačić – Professor of Physics, Department of Physics, MIT.
- Jeannie Suk – Professor of Law, Harvard Law School, Harvard University.
- Jonathan Zittrain – Professor of Law, Harvard Law School and Harvard Kennedy School of Government; Professor of Computer Science, Harvard School of Engineering and Applied Sciences, Harvard University.
We discussed the modern military use of drones and other semi-autonomous, non-human forms of warfare. In some ways, robot technology represents merely a new weapon of war, like the crossbow or gunpowder. However, as more and more decisions are aided by machines, there is some evidence that reliance on robots makes humans less likely to overrule them in favor of their own judgment in circumstances the AIs did not anticipate. For example, the crash of Air France 447 was traced to pilots who were not trained to handle the aircraft when the autopilot was not functioning, and who did not trust the non-autopilot instruments.
Internationally, forty-five states currently have drone technology, and it is becoming increasingly accessible to non-state actors. The use of drones or very small surveillance robots for criminal purposes may become normal, and access to these technologies could increase the power of non-state actors. The combination of WMDs and drone technology could allow a terrorist organization to deploy a weapon without putting a person on the ground.
Additionally, the falling prices of small surveillance robots and memory storage, combined with the rise of machine learning, may make it commonplace to monitor the activities of civilians at all times in order to determine appropriate targets. Imagine a microphone near every kitchen table in a small village in Afghanistan, listening for “insurgent” activity. Law governing surveillance of activities in plain view typically relies on the cost and effort of monitoring and processing information being high enough that mass data collection is not effective. What should happen when those assumptions no longer hold?
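As a rough sketch of why the cost assumption collapses, consider how little code it takes to scan transcribed audio for trigger terms. Everything in the sketch below, from the watchlist to the microphone feed, is hypothetical:

```python
# Toy sketch: once audio is cheaply transcribed, scanning it for
# watchwords costs almost nothing per additional microphone.
# The watchlist and transcripts are entirely hypothetical.

WATCHLIST = {"weapons", "attack", "convoy"}  # hypothetical trigger terms

def flag_transcripts(transcripts):
    """Return (source_id, matched_terms) for any transcript mentioning
    a watchlist term. Linear in the size of the corpus, so monitoring
    a whole village costs little more than monitoring one household."""
    hits = []
    for source_id, text in transcripts.items():
        matched = WATCHLIST & set(text.lower().split())
        if matched:
            hits.append((source_id, matched))
    return hits

# Hypothetical feed: one transcript per kitchen-table microphone.
feed = {
    "village-mic-017": "the convoy passes the market at dawn",
    "village-mic-042": "we need more flour before the wedding",
}
print(flag_transcripts(feed))  # -> [('village-mic-017', {'convoy'})]
```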
Another question raised by the military use of AI is how to evaluate decisions made by non-human actors. Would countries be responsible for explaining the variables and algorithms behind drone decisions? In the past, technological progress has decreased war casualties and, in the case of nuclear weapons, deterred countries from going to war. Yet wars may become more common as the potential for both collateral and symmetric loss of human life decreases. See, for example, Congress's debate about the use of drones in Libya, where the lack of human involvement was one reason some politicians were willing to get involved. How will norms related to killing change if there is no potential for a human to be harmed on the attacker's side?
Recent flash crashes have shown the role of algorithms and high-frequency trading on the New York Stock Exchange, and the potential for disaster. For example, in August 2007, a fifteen-minute glitch, caused by programmers using a placeholder value of a penny in an algorithm, triggered thousands of sell orders. Stock prices of some companies dropped from forty dollars to less than a dollar in minutes, and the NYSE rolled back a number of trades.
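The mechanism is easy to see in a toy model of an order book. The sketch below is nothing like a real matching engine, and the prices are invented, but it shows how a one-cent placeholder becomes the best available price once genuine bids are pulled:

```python
# Toy illustration (not a real matching engine) of how a one-cent
# placeholder quote can become the best available price once real
# liquidity is pulled, printing a $40 stock at a penny.

def best_bid(book):
    """Highest resting buy order, or None if the book is empty."""
    return max(book) if book else None

bids = [40.10, 40.05, 40.00, 0.01]  # last entry: placeholder stub quote

# Normal conditions: a market sell hits the real best bid.
print(best_bid(bids))  # -> 40.1

# Cascade: automated traders cancel their real bids...
for level in (40.10, 40.05, 40.00):
    bids.remove(level)

# ...and the next market sell executes against the placeholder.
print(best_bid(bids))  # -> 0.01
```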
High-frequency trading is a form of algorithmic trading that depends on the ability to trade small amounts of stock quickly in order to make small amounts of money on each trade. Trades can be made in under a millisecond, and firms now compete for server space as physically close to the stock exchange as possible in order to complete trades faster. It's dubious that high-frequency trading adds significant value or benefit to the market (besides making a small number of people very rich). There is a definite wealth transfer from those without the technology to those with it, and increased volatility, which runs counter to the notion that quicker trades make for better liquidity and stability.
Inequality between firms with algorithmic capability and those without it is a significant concern. The algorithms are not patentable, so firms keep them as trade secrets, and there is a definite gap between firms that can afford to develop algorithms and firms that can't. Firms with the technology will continue to make more money than those without, polarizing the market even further.
Given that flash crashes have already happened, a large portion of this session was devoted to discussing potential methods of regulation, including a tax on trades (a “Tobin tax”) or a requirement that orders be posted for a certain amount of time. The Tobin tax has serious downsides, as there are reasons other than algorithmic or high-frequency trading for a firm to make many trades quickly; for example, pension funds often need to liquidate large amounts of stock over a brief time frame. Requiring that orders be posted for a short time (say, one second) has less obvious downsides, and could prevent crashes of the type that happened in 2007.
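Some back-of-the-envelope arithmetic makes the Tobin-tax trade-off concrete. Every number below is an illustrative assumption, not a figure from the discussion:

```python
# Rough arithmetic on a hypothetical Tobin tax. All figures are
# illustrative assumptions.

share_price = 40.00   # dollars
tax_rate    = 0.001   # assumed 0.1% tax per trade
hft_margin  = 0.001   # assumed HFT profit per share, a tenth of a cent

tax_per_share = share_price * tax_rate
print(f"tax per share:        ${tax_per_share:.3f}")  # $0.040
print(f"HFT margin per share: ${hft_margin:.3f}")     # $0.001

# The tax dwarfs the per-share margin, so rapid-fire strategies die.
# But a pension fund liquidating a million shares pays the same rate:
print(f"pension fund cost on 1M shares: ${tax_per_share * 1_000_000:,.0f}")
# -> $40,000
```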
Because of the secrecy surrounding the algorithms, it has not been possible to measure the systemic risk posed by many automated traders acting at the same time. Even if limitations on trading are not imposed, however, regulators should at least attempt to determine the total risk and whether the rewards are worth it.
The types of jobs that computers are able to do have changed significantly over the past few years. For example, law firms used to hire associates to do document review for discovery, but can now use computer programs instead. White-collar jobs are becoming increasingly susceptible to automation.
During past labor revolutions, technologies that improved productivity did not destroy jobs entirely; they merely moved them to different sectors. However, most of the jobs replaced then belonged not to knowledge workers but to blue-collar or manual laborers. Because of this difference in the type of jobs being replaced, it's possible that the current shift won't result in the same sort of job movement as past ones. There was fundamental disagreement about whether the historical trend will hold: whether robots and AI will destroy jobs or whether the jobs will simply move to other areas. Some argued that the new jobs created may be “below human dignity,” underpaid, or otherwise not ideal, but will exist; others foresaw a more general move toward robots, with humans not finding new areas of work.
Another key theme was whether computers or robots are appropriate for jobs that require binding decision-making. So far, most computer-driven advances in labor markets have improved productivity, with humans still in control. However, as machines become more sophisticated, it's possible that they will make fewer errors than similarly situated humans. For example, a parole board in Israel was found to parole 65 percent of prisoners seen at the beginning of the day, but the rate dropped to near zero by the end of the session, when the judges were about to break for lunch. Robots, the claim goes, may be in a better position to make those kinds of decisions: they wouldn't be swayed by emotional appeals, biases, or the time of day, and could evaluate cases based on a specific set of variables. If robots can do better than the equivalent human, should we be prepared to replace parole board members? How do we handle accountability for robot justices? There was a spirited split within the group on this issue.
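To make “a specific set of variables” concrete, here is a minimal sketch of a deterministic parole score. The variables, weights, and threshold are entirely hypothetical; the only point is that the output cannot vary with the hour or the decision-maker's fatigue:

```python
# Minimal sketch of a fixed-variable parole recommendation.
# Variables, weights, and threshold are hypothetical illustrations.

def parole_score(offense_severity, years_served, infractions, age):
    """Deterministic score from a fixed, inspectable set of inputs."""
    return (2.0 * years_served
            - 3.0 * offense_severity
            - 1.5 * infractions
            + 0.1 * age)

def recommend_parole(case, threshold=5.0):
    return parole_score(**case) >= threshold

case = dict(offense_severity=2, years_served=6, infractions=1, age=45)
# Same answer at 9 a.m. and at 11:55 a.m., for better or worse:
print(recommend_parole(case))  # -> True
```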
There's something discomfiting about life-and-death decisions being made by algorithmic processes, even given the foibles of human decision-making. Are there cases where we would want humans to make a decision, even if they are worse at it than an algorithm would be?
The workshop then held a mini-session about education and the role of the university professor as technology progresses. Examples of teaching affected by technology included Stanford's AI class (with 54,000 people taking the class via the Internet) and the development of computer simulations of experiments. Using computers to speed up and aid research is easy in some fields; technological progress becomes more complicated, however, where the student-teacher experience is hard to scale.
Technology could help democratize the educational experience; however, some of the spontaneity and personal connections between professors and students might be lost. Allowing for universal access to educational materials may be beneficial, but how do you ensure quality control and preserve the ability of students to interact personally?
It’s easy to make doomsday predictions without understanding the science, or to suggest regulation as a kneejerk response, but it’s important to realize that it’s hard to intervene without data.
For the finance case, intervention seemed most helpful because there was a clearly defined set of problems and actors. In the military and labor cases, fundamental uncertainty about the next steps along the AI path meant that regulation (or even prediction) seemed unwise. However, all three cases made the participants wish for a better understanding of the systemic risks involved in the changes that have already occurred, in order to better prepare for the future.
Summary by Kendra Albert.