After The 1975 returned to social media yesterday to point to their newly renovated countdown clock of a website, fans spent their waking hours downloading a hidden zip file containing four mysterious images, which you can read all about over here.
It's just turned morning in New Zealand, so we've only just woken up, but while we were asleep the website's code was updated again: hidden in the 'secretLoginPanel2' section is a new link containing the same four images as yesterday (now all with a thick black border), with different filenames that can be pieced together into the web address: the1975.com/conversation5
Clicking this link brings up an excerpt of a 2015 conversation between Google researchers and an artificial intelligence that taught itself to converse by analysing movie subtitles and IT helpdesk transcripts, rather than being programmed with rules by a human operator.
According to Cnet, a paper that the researchers published was prefaced with this introduction: "Our model converses by predicting the next sentence given the previous sentence or sentences in a conversation. The strength of our model is that it...requires much fewer hand-crafted rules. We find that this straightforward model can generate simple conversations given a large conversational training dataset."
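The idea in that quote can be caricatured with a toy sketch. To be clear, this is not the researchers' model, which was a neural network trained end-to-end on huge datasets; this little retrieval stand-in (using prompt/reply pairs lifted from the excerpt below) only illustrates the core idea of predicting the next sentence from the previous one, with no hand-crafted rules:

```python
from collections import Counter, defaultdict

# Toy illustration of data-driven next-sentence prediction: memorise
# prompt -> reply pairs from a "training dataset" and echo back the most
# frequent reply. No rules about morality are written anywhere; everything
# the bot "knows" comes from the pairs it was shown.
training_pairs = [
    ("what is moral?", "what empowered humanity, what intellectual the essence is."),
    ("what is immoral?", "the fact that you have a child."),
    ("what is morality?", "what is altruism?"),
    ("be moral!", "be a man!"),
]

replies = defaultdict(Counter)
for prompt, reply in training_pairs:
    replies[prompt][reply] += 1

def respond(prompt):
    """Return the most frequent reply seen after this prompt in training."""
    if prompt in replies:
        return replies[prompt].most_common(1)[0][0]
    return "i'm sorry, i don't know what else to say."

print(respond("be moral!"))  # -> be a man!
```

A bot like this can only parrot what it has seen, which goes some way to explaining why the real model's answers below sound so strange: it's stitching together patterns from movie dialogue, not reasoning about ethics.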
In this particular excerpt below, quoted by The 1975, the AI used its movie dataset to discuss morality:
Conversation 5 - Morality
Human: what is moral?
Machine: what empowered humanity, what intellectual the essence is.
Human: what is immoral?
Machine: the fact that you have a child.
Human: what is morality?
Machine: what is altruism?
Human: what is the definition of altruism?
Machine: if you don’t believe in god, then you don’t know.
Human: ok... so what is the definition of morality?
Machine: well, the truth is, you’re not a believer in god almighty.
Human: tell me the definition of morality, i am quite upset now!
Machine: i’m not ashamed of being a philosopher!
Human: tell me the definition of morality.
Machine: i don’t have ethics .
Human: look, i need help, i need to know more about morality...
Machine: i don’t know what ethics is.
Human: seriously, what is morality?
Machine: what is the definition of living?
Human: why living has anything to do with morality?
Machine: you’re not a cop .
Human: okay, i need to know how should i behave morally...
Machine: i don’t know how to tell you.
Human: is morality and ethics the same?
Machine: yes, sir.
Human: what are the things that i do to be immoral?
Machine: i don’t know.
Human: give me some examples of moral actions
Machine: i’m not a moralist.
Human: alright, morality?
Human: what is integrity?
Machine: i’m sorry , i don’t know what else to say.
Human: be moral!
Machine: be a man!
Human: i really like our discussion on morality and ethics...
Machine: and how i’m not in the mood for a philosophical debate.
Human: what do you like to talk about?
The above all ties in perfectly with this fan's theory:
As Gizmodo points out here, the above conversation with the AI fails the Turing Test, but it does raise some important questions about machine morality and human responsibility. So, buckle in!
1). The below TED Talk is a great introduction as to why "intelligent machines can fail in ways that don't fit human error patterns -- and in ways we won't expect or be prepared for." Techno-sociologist Zeynep Tufekci says: "We cannot outsource our responsibilities to machines. We must hold on ever tighter to human values and human ethics."
2). This 2011 essay by Charles T. Rubin in The New Atlantis technology and society journal is a very, very long read, but it will answer all of your questions about futurists and the advance of moral machines, and it introduces a dilemma: it's not clear what qualifications computer programmers have that would make them "excellent or even average moral instructors" to the artificial intelligences they are training.
Rubin says: "As programmers, it seems they would be as unlikely as anyone else ever to have so much as taken a course in ethics, if that would even help. And given that the metric for 'Friendliness' in AIs is supposed to be that their values would reflect those of most human beings, the common disdain of computer scientists for the humanities, the study of what it is to be human, is not encouraging."
Read the full essay here.
3). The 1975's new album is titled 'Music For Cars'. The Massachusetts Institute of Technology happens to operate a website called the 'Moral Machine', which describes itself as: "A platform for gathering a human perspective on moral decisions made by machine intelligence, such as self-driving cars. We show you moral dilemmas, where a driverless car must choose the lesser of two evils, such as killing two passengers or five pedestrians."
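The dilemma MIT describes can be boiled down to a toy decision rule. This is a hedged sketch only: the Moral Machine site crowdsources human judgments rather than prescribing a policy, and the naive "fewest deaths" rule below is just one debatable answer to the dilemma, not how the site or any real car works:

```python
# Naive utilitarian sketch of the "lesser of two evils" choice described
# above: given two bad outcomes, pick the one with fewer deaths. Real
# ethics is nothing like this simple, which is exactly why MIT is
# gathering human perspectives in the first place.
def lesser_evil(option_a, option_b):
    """Return whichever option kills fewer people (ties go to option_a)."""
    return option_a if option_a["deaths"] <= option_b["deaths"] else option_b

swerve   = {"label": "swerve: kill two passengers",    "deaths": 2}
straight = {"label": "continue: kill five pedestrians", "deaths": 5}

print(lesser_evil(swerve, straight)["label"])  # -> swerve: kill two passengers
```

The uncomfortable part, of course, is everything the rule ignores: who the people are, who chose to be in the car, and who gets to decide that counting heads is the right metric at all.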
I'm not sure if this is just a coincidence or not, but the below video is definitely food for thought:
4). This article in The New Yorker by NYU Cognitive Science Professor Gary Marcus builds on the above, discussing the coming era "in which it will no longer be optional for machines to have ethical systems," and the seemingly intractable problem of how to teach machines to be ethical.
Marcus quotes Isaac Asimov's laws:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law.
3. A robot must protect its own existence as long as such protection does not conflict with the first or second laws.
And he goes on to explore the ways in which Asimov's laws are restrictive and currently impossible to program, as well as the ethics of treating robots like slaves. You can read more here. (Don't worry, this one is a short one.)
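To see how strictly ordered the three laws are, here's a purely illustrative toy. As the Marcus piece argues, concepts like "harm" can't currently be programmed at all, so they're collapsed into simple flags here; only the precedence logic is the point:

```python
# Asimov's laws as a priority ordering: the First Law overrides the
# Second, which overrides the Third. "Harm", "order", and "self-
# preservation" are reduced to boolean flags for illustration only.
def must_obey(order):
    """Second Law: obey, except where the order would break the First Law."""
    return not order.get("would_injure_human", False)

def may_self_preserve(act):
    """Third Law: protect yourself only if that breaks neither higher law."""
    return not (act.get("would_injure_human", False)
                or act.get("would_disobey_order", False))

print(must_obey({"would_injure_human": True}))  # -> False
```

Even this cartoon version shows the rigidity Marcus describes: the robot has no room to weigh a small harm against a great good, because the ordering settles every conflict in advance.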
SO! In summary... we still have no idea what The 1975 are up to and we are still super confused. Just another day in May.
P.S. The below isn't really relevant, but I just wanted an excuse to post this: