SUNDAY, 11 FEBRUARY 2018
As we marvel at the latest gadgets, technology is already working behind the scenes to bring us the ‘Next Big Thing’. It has a sly habit of developing rapidly, yet never making leaps so big that we collectively stop to think about where it is all headed. Like a frog in a slowly heating bath of water, we may not know we are being boiled until it is too late. In some areas, like medicine, strict codes of ethics force researchers to think before they innovate, but outside this bubble of hyper-self-awareness, it is the arrival of new technologies that sets the pace of public debate. This afterthought approach leads us into hot water, and as with most of humanity’s inventions, misuse of new technology inevitably leads to conflict of some sort. To avoid these problems, thought must be put into the legal status of these developments long before we are buying them off the shelves.

But how do we go about drafting clear legal frameworks that can handle innovation without being outpaced by them? The problem seems to be one of predicting the future: the unenviable task of foreseeing every invention of the next hundred years. Fortunately, this isn’t the case. As new technologies like drones and driverless cars appear, and old fields like space exploration develop, it may not be possible to anticipate every individual breakthrough ahead of time, but the key ones can certainly be dealt with. A 2016 New Scientist article on transcranial direct current stimulation, a method of activating certain brain regions with electricity, mentioned the ethics of using it as a performance enhancer in sport, only to shy away from the issue, calling it “a debate for another day”. This approach makes it all too easy to delay the conversation indefinitely, until something goes seriously wrong.
Our current struggle with climate change is a prime example. So why are we so bad at preparing for the future? According to researchers at Stanford and Princeton, it may be because we view our future selves in much the same way as we view strangers. Combine that with a strongly evolved tribal instinct and it is not hard to see why we struggle to plan ahead. We all know how easy it is to put off important work, even when we know hindsight will tell us it should have been done earlier. Does knowing this make it any easier to empathise with your future self the next time it happens? The answer is a resounding no, and this is where problems surface.
Space exploration is one of the technologies for which we struggle to plan. One of its greatest prospects is the extraction of metallic and organic resources from asteroids. Based on current estimates, there is enough mineral wealth in the solar system to supply our present requirements for tens of millions of years. Yet the most wide-reaching document we currently have, the United Nations Outer Space Treaty, is decidedly unconstructive. One of the declarations that lays its groundwork explicitly states that “no one nation may claim ownership of ... any celestial body”. It is difficult to see how any kind of commercial expansion into outer space can happen without the basic property rights this declaration bans. Given that the lead in this area is currently being taken by private enterprises like Blue Origin and Virgin Galactic, the existing laws may well prove a chokehold on development until they are revised or simply ignored.
Luxembourg and the United States have already begun this process, with President Obama signing the Commercial Space Launch Competitiveness Act in 2015. In direct contradiction of the United Nations declaration, this gives U.S. firms rights to “possess, own, transport, use, and sell” any asteroid resources they can extract. If these opposing views came to court over a real issue, a lengthy legal battle would no doubt ensue. If we are able to settle these matters before the necessary technological advances are made, this simply won’t be an issue. Meanwhile Luxembourg has declared significant state-sponsored benefits for companies in the space industry, including a 45% rebate on any research and development expenses. While the space tourism industry will no doubt be lucrative, no business is going to commit itself to the outlay required to prepare an asteroid for mining if there is no guarantee of ownership.
To counter this confusion the International Institute of Space Law, a global consortium of experts from space-faring nations, has clarified its position on the subject of property. While it does not see the United Nations’ position as a ban on private acquisition of space resources, it has called for the treaty’s thorough reworking in search of “legal certainty in the near future”.
Closer to home, the increasing numbers of civilian drones and our steady progress towards the driverless car present more immediate problems. The first remote-controlled drone saw daylight a century ago across the English Channel, yet even now we are still waiting for a government strategy, promised in late 2016, to improve drone safety in the UK. There are now models on the market that can carry over ten kilograms and fly for more than thirty minutes. It takes no great leap of imagination to see that these could wreak havoc in the wrong hands.
The hobbyists’ market, too, represents a time bomb. In the last year there were thirty mid-air incidents involving drones, up from six in 2014. In December alone, five of these were classed in the highest risk category, including near misses involving a Boeing 737 at Stansted and a Boeing 777 at Heathrow. In April, one hit a British Airways plane, albeit harmlessly. Drones currently require no training to use, nor do they need to be registered, and despite the number of incidents there have been only two convictions for dangerous flying: one for entering restricted airspace above a nuclear submarine base, the other for a fly-over of Westminster.
Again, we can look to the US for a way forward. On 21 December 2015, anticipating large numbers being given as Christmas gifts, the Federal Aviation Administration introduced a mandatory registration system for all drones, with some 300,000 accounted for by the end of January 2016. This instils some responsibility in owners, making them easier to trace in the case of an accident or of misuse. The question remains whether registration should be brought back to the point of sale, with every new owner registered when they buy, to ensure no new pilots slip through the net. The regulatory issue is especially pertinent for Cambridge, where Amazon opened a research centre in 2014 to explore the possibility of replacing traditional delivery systems with drones. In this instance, the UK was chosen precisely because of our lax regulations.
The legislation we have in place for driverless cars seems reasonably developed by comparison. The insurance system is willing to cover them, and they can be legally tested on Britain’s roads, with one site nearby at Milton Keynes. The government envisages a future where driverless cars enable all those who currently don’t hold a licence – including the infirm, children and a third of women – to travel freely. The question of liability in the case of an accident has even been partly addressed, with a California ruling stating that the car itself can be the driver. This opens the way for companies like Google to assume liability in the case of an accident, something they have always said they would do. As such, responsibility can be removed from the owner, avoiding the issue of them being blamed for a crash caused by the car.
The problem here lies more with the specifics of the algorithms the cars will use in the case of accidents. In I, Robot, the film spun out of Asimov’s groundbreaking short story collection of the same name, Will Smith’s character is saved from drowning instead of a young girl. The android that rescues him calculates his chance of survival at 45% and the girl’s at 11%, and decides on balance to save him. This is a version of what ethicists call the trolley problem, hotly debated since the late 1960s. We might imagine a child running in front of a car carrying five people: if the car swerves and crashes, it puts the occupants in danger, but if it does not, the child will be hit. The algorithms used in situations like this will need to be transparently developed and relentlessly tested before driverless cars are allowed to take to the road.
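To see why transparency matters, consider a minimal sketch of the kind of rule the film’s android appears to follow: save whichever person has the higher estimated chance of survival. This is purely illustrative – the function, names and probabilities are assumptions for the sake of the example, not taken from any real system – but it shows how a single line of logic can encode a contested moral stance.

```python
# Hypothetical sketch of a purely utilitarian rescue rule, as described above.
# Each candidate is a (name, estimated_survival_probability) pair; these
# figures are illustrative, echoing the 45% vs 11% example from the film.

def choose_rescue(candidates):
    """Return the candidate with the highest estimated survival probability."""
    return max(candidates, key=lambda c: c[1])

candidates = [("adult", 0.45), ("child", 0.11)]
print(choose_rescue(candidates))  # the rule picks ("adult", 0.45)
```

Note that the entire ethical debate is hidden inside the choice of `key`: weight the child’s life more heavily, or account for uncertainty in the estimates, and the decision flips. That is precisely the kind of design choice the article argues must be debated in the open before deployment.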
Maybe it takes a pessimist to recognise the potential for danger in our creations. While affected industries may cry foul at regulatory measures if they slow progress, getting people to think about the potential impacts can only be a good thing. When the risks are at best loss of life and at worst, in the case of space exploration, international tension on a scale not experienced since the Cold War, the incentive must be found to look before we leap.
Harry Lloyd is a 3rd year Natural Scientist studying Chemistry at Emmanuel College.
Image Credit: Oran Maguire