You Got This!

The What, How, and Why of a Code of Ethics

Transcript

Summary

The Association for Computing Machinery (ACM) Code of Ethics was updated in July 2018 through a participative effort involving experts, members, and the public.

Why is it relevant?

There are lots of ethical questions that are still unsolved, and people are sceptical about new technologies. While the original code focused on professionalism and following laws, the new code is not just a place to find answers but also a place to find ways of thinking through potential issues that could arise with the technologies being developed.

The first principle of the code states, "A computing professional should contribute to society and human well-being, acknowledging that all people are stakeholders in computing." Another phrase reads, "Where interests of multiple groups conflict, the needs of the less advantaged should be given increased attention and priority." Some major changes and additions include:

  • 1.4 Be fair and take action not to discriminate
  • 1.5 Respect the work required to produce new ideas, inventions, creative works and computing artefacts
  • 2.9 Design and implement systems that are robustly and usably secure
  • 3.6 Use care when modifying or retiring systems
  • 3.7 Recognize and take special care of systems that become integrated into the infrastructure of society

Responsible innovation requires you to:

  • Anticipate the potential impact of your technology

  • Reflect on the ethical and social issues your tech may raise (using the Code!)

  • Engage with relevant stakeholders to help identify potential issues/mitigate them

  • Act by putting methods in place to ensure issues are resolved

I'm Catherine Flick, I'm a Reader in Computing and Social Responsibility, which basically means I do technology ethics. That is my bread and butter. I've been doing it since my PhD was published in 2009, so well over a decade, and I spent some time doing the PhD before that, so probably about 15 years or so now.

My specific areas of interest are emerging technologies and responsible innovation, and I will be talking a little bit about both of those things throughout this talk. Now, I was invited to give a talk about the why, what, and how of the code of ethics primarily because I was invited by the ACM - the Association for Computing Machinery, which you may have heard of - to help them update their Code of Ethics. This was in 2017/18.

I will get on to why we were doing this in a little bit, but that is basically what this talk is going to be about. I was invited because of the work I was doing on emerging technologies, and because they didn't have any non-Americans on the team - it was a very American-centric group of people trying to create a code of ethics that was worldwide, global in its reach. We had an extensive extended team and many, many consultations - quite a participative event, really, for the entirety of the ACM. Over 4,000 people from the ACM responded to a qualitative survey, for example, and I did the qualitative analysis for that. So I was kind of the key non-American perspective when it came to actually putting together the final bits of the code.

So, like I said, we updated the code in 2018. The previous version was from 1992. Now, some of you maybe weren't even born by 1992, but the space we're talking about is a little bit different now in terms of what the computing profession is and what sorts of things it encompasses - the internet wasn't a widespread thing in 1992. There was a lot of stuff that we do now that hadn't even really been thought of. If you go back to the 1992 Code of Ethics, it surprisingly still stands up for the most part, although security is talked about mainly in terms of physical security, which is an interesting little quirk of the time.

Some of you might be ACM members. I think it is really important you understand what you've signed up for if you are. I'm not an ACM member myself, and I don't mind whether you are or not, but if you are an ACM member, you should care about the code. It's one of the things that defines you as a computing professional. You can fall back on it - I will talk about what teeth it has at the end - but it is really something you can lean on, you can look to, and you can get guidance for your professional day-to-day life through the examples we have, the case studies, and by talking to ethicists within the ACM. There is a whole group of us available for questions and for things to talk about in terms of the ACM code.

Also, there are a few other reasons why we should be talking about codes of ethics, and that is really: why is it relevant? We have had codes of ethics for a really long time, but it is only in the last five or so years that big companies have started to actually think about ethics, because they've realised just what a complete monster they've unleashed on the world and thought, uh-oh, we need to rein this stuff in. I'm not going into the specifics of whether they're doing a good job or not - that's outside the scope of this talk - but the big companies are working on it, so maybe you should be as well. There are loads of issues that even researchers haven't sorted out yet, let alone how they play out when implemented in the community and in society.

There are lots of ethical questions that are still unsolved or unmitigated in implementation land, I suppose you could call it. The Financial Times, which is a fairly conservative bellwether of industry, suggests that you can't ignore it, and given my job I would suggest that you probably should not ignore it either. Really, there is an emphasis on making sure that you are not just creating things, but creating things that actually contribute to society, because it could cost you in the end. I love the fact that the FT frames the potential negative impact as being costly, as opposed to the potential positive impact being good for society, but that's another talk as well.

Finally, Raconteur, which is a business decision-making magazine, talks about how to balance ethics during tech transformation: as you move on in different ways and create new innovation chains, you need to make sure it's not just regulation and the law you're looking at, but also the potential social impact.

Basically, the public is wising up and you get a lot more ethical consumerism. People are sceptical about new technologies. It is time to start doing this - although you're at this talk, so you probably already are, and you're doing the right thing if so. One of the key aspects that we changed in the Code of Ethics was its focus. The original code was all about professionalism and what it was to be a good professional: following the rules, following laws, really very rigid, rule-bound, Thou Shalt Not kind of stuff.

The new code is not only about answering questions but about raising questions. It is not just a place to come to find answers but a place to find ways of thinking through potential issues that could come up with the technologies that you're developing. One of the key aspects of this Code of Ethics is its opening statement, which I will read verbatim: "Computing professionals' actions change the world. To act responsibly, they should reflect on the wider impacts of their work, consistently supporting the public good. The ACM Code of Ethics and Professional Conduct expresses the conscience of the profession."

Then the very first principle is that "a computing professional should contribute to society and human well-being, acknowledging that all people are stakeholders in computing." This is so important because, if you ever have problems resolving issues, or you have conflicting values that you need to resolve, the paramount principle is the public good. So if you're struggling with how to resolve issues, come back to the public good. One of the things in this particular principle is that it also includes wider stakeholders, you might want to call them - even the environment and sustainability are included here. There's a real focus on diversity and diverse voices, on social responsibility, which has come out of the corporate social responsibility movement, and on accessibility - all part of this focus on the public good.

There is a really good phrase within the Code of Ethics which says, "Where interests of multiple groups conflict, the needs of the less advantaged should be given increased attention and priority." So it should not be about the bottom line so much as what is good for the marginalised people who are going to be affected by this technology - and that is still where a lot of big companies are having their problems. I think one of the ways the code really captures the Zeitgeist of the time is the changes and additions we have made. You're probably not familiar with the Code of Ethics - I'm not assuming you are at all - but I want to pick out a couple of the main things we changed, explain why we changed them, and what they add to a code of ethics for computing professionals.

Our most controversial new addition - a bit of a change really, a lot of it got mangled along the way, but it's essentially a new thing - was 1.4, which is "be fair and take action not to discriminate". The reason it was less well accepted by a very loud but tiny minority of ACM members and the general public was their concerns about "social justice warriors", diversity, stuff like that. They were just really angry that we were worried about diversity. Now, technology has had diversity issues for a significantly long time - ever since it really started. It's still highly discriminatory, even though we've been trying to work on this for so long. This really was the one with the most loudly angry minority. We paid attention to it, and then we dismissed it, because this is about progress, and we can't have technology that is mired in the issues that we know exist in terms of discrimination and lack of diversity in technology. I will get on to the diversity side of things a little later as well.

So, 1.5 was also a big change. That's one of the ones I'm most proud of, because it used to be "respect copyright", the end. Since 1992, the discussion has come a long, long way in terms of how we deal with crediting work. Lots of open source movements have come up, we've got Creative Commons, we've got all kinds of collaborative methods of working now, and there is an increased understanding about things like fair use and what should be available to the public. It's much more nuanced these days, so we now try to recognise other ways to credit work, and the places where the public good takes precedence over legal requirements. There is a clause in there that says, basically, if you think something is in the public interest, you should be aware there could be legal proceedings taken against you, but ethically you're in the right to do it.

2.9 is really this sort of "security by design" idea that we wanted captured here. Back in the previous code, it was all about physical security - literally lock the doors, store data in locked filing cabinets! These days, obviously, things have changed slightly, and we now know that security not only has to be good in terms of the security itself, but it needs to be usable. We talk about misuse as well: it is not just about things being secure, they need to be robust, they need to be updated, there needs to be protection against misuse, and you have a responsibility to do that as the technology designer or implementer. We have a particular thing about misuse here: "In cases where misuse or harm are predictable or unavoidable, the best option may be not to implement it," and that is something very new within the Code. We say if you can't deal with the security ramifications or it's going to harm people, just don't do it. This is the classic Jurassic Park quote, right? Malcolm saying, "Just because you can do it doesn't mean you should." That's what we want to capture there.

3.6 is "use care when modifying or retiring systems". This is a brand new one where we wanted to deal with the likes of the Windows effect, or Google, who tend to retire products without really thinking about the knock-on effects. I guess the Windows 7 debate was the big one that we really wanted to deal with here. It requires that, if you're going to be retiring things, you migrate users very carefully, and you provide viable alternatives to removing support for legacy systems. That is one of the key things that we've added in there.

Finally, the one that I think is really, really important, especially in the age of social media, is 3.7, which is "recognise and take special care of systems that become integrated into the infrastructure of society". These systems - the Googles, Facebooks, Apples, and Amazons of the world - have so much say over how we live our lives. They have so much say over who gets to talk to whom, who gets included in an app store. We wanted to capture the significantly deeper ethical obligations they have to society when they are put in charge of that much control. With great power comes great responsibility, right? This is what we are trying to capture here.

So I've gone through a couple of the changes and some of the things we've done, and hopefully that's whetted your appetite to read the whole code and get excited about applying it. How do you actually do that? One of the things I really want to re-emphasise is that the code is not a set of answers, it's a set of methods that you can use to really think through what it is that you're doing. As an ethicist, I often get called to comment on stuff when it is too late. I will get a call, or an email in my inbox, that says, "I have this great start-up, we've got all this money, we've got our MVP ready to go, can you give us some ethics guidance?" And I will be like ... you should have called me a year ago! You can do some mitigation, but if you want to build an ethical product, ethics needs to be built in from the beginning. This reflection that you need to do, even on the very early ideas, is really important. I will give you a framework at the very end that factors the code in and helps you go through the whole innovation life cycle. But the code really should be getting you to ask questions, which you should be thinking about honestly.

You know, it's very easy to put the blinkers on and think, yeah, my technology is going to solve this problem in the world. It is very easy to get quite blinkered on the problem-solving and forget that it might cause harm, or cause problems for people who aren't the key end-users you're thinking of. So it really needs to be considered thoroughly and honestly, and you need to come up with answers that are very specific to the technology you're developing. We can't give you those in the Code of Ethics - you have to come up with them yourself - but we give you a guide on how to get there.

I'm going to give you a couple of scenarios where you might want to think about these sorts of issues. I'm going to see how much time I've got left - that's good. Let's say you're building some tech for vulnerable people. I've got an example here of video games for children - a bit of an old screenshot, but it illustrates some of the issues I'm concerned about. Obviously, children are vulnerable. They maybe should not be making purchases through mobile phones without their parents' permission, so if you're developing a video game aimed at children, you need to be thinking hard about what your monetisation practices are. So when you're thinking, "Oh yes, I want to build something for children," let's go through the code. What I want to reinforce here is that you need to be looking at this holistically. I'm going to pick out a few bits and pieces I think are specifically important and that you should definitely focus on, but you may find other things important too.

Principle 1.2 is simple: avoid harm. Don't harm children, or vulnerable people generally - I'm really hoping this is fairly self-explanatory. We are keen to say in "avoid harm" that if there is a potential for harm and you're not sure whether it will happen or not, try to mitigate it as much as possible. If harm does occur, you need to try to undo it if you can, to redress it. This is particularly relevant for emergent properties: if you've got machine learning applications, for example, that work with vulnerable people, the good may be really, really good, but you need to keep an eye out for potential harms, and then undo or mitigate them where you can. 1.3 is about being honest and trustworthy. If you're working with vulnerable people, you may need to change the level at which you're communicating. If you want children's consent to use an app, it needs to be written about in a different way than an app for scientific data analysts, or something like that - there are different language levels. You need to be up front with all of the potential harms, the capabilities, the limitations, and the problems that could occur as a result of using your technology.

In 2.7, we want you to foster public awareness and understanding of computing, related technologies, and their consequences, and this is about education. It is really about helping kids and other vulnerable people - maybe allowing your employees to do some volunteer work at a local Code Club, or at an old people's home helping residents use Skype, or something like that. By working with the people you're likely to be targeting, you're going to have much more empathy and much more ability to make things for them, because you can understand their perspectives. Doing things like fostering public awareness is one way to do that. Then finally, one of the particular things that you should really focus on is obviously security. If you're dealing with children, say - for example, I have baby monitors, and I personally don't want internet-enabled ones because their security tends to be pretty rubbish; they don't get updated. There are lots of issues you can have, particularly for vulnerable people, around security updates, and vulnerable people tend to be bad at doing security updates as well. So there are a lot of issues you need to really think about when you're working with vulnerable people.

Data analytics and machine learning. We still have these problems, okay? People still release stuff that does this. It is just incredible. I should not have to give too many examples of this, but this was a good one - it should be self-explanatory. This is my Code of Ethics booklet, which you would have got in the mail if you were an ACM member in 2018. These principles are all fairly self-explanatory: don't discriminate, respect privacy, produce high-quality work.

I'm going to get angry about 2.5. In 2.5 we talk about giving comprehensive and thorough evaluations of computer systems and their impacts, including analysis of possible risks. I want to read this particular part: "Extraordinary care should be taken to identify and mitigate potential risks in machine learning systems. A system for which future risks cannot be reliably predicted requires frequent reassessment of risk ... or should not be deployed." If you're going to let something go out into the wild and forget about it - oh, you know, the computer learns by itself - that's not good enough any more. You need to be monitoring it, ready to pull the plug if it starts doing this sort of stuff. I'm getting really mad about this, but I can't emphasise it enough. It's just not good enough any more. That's my rant over!
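To make that monitoring point concrete, here is a minimal, hypothetical sketch - not from the talk or the Code itself - of what "frequent reassessment of risk, ready to pull the plug" could look like in practice. All of the names, metrics, and thresholds (RiskReport, evaluate_risk, the 5% error limit, and so on) are illustrative assumptions, not a real API.

    # Hypothetical sketch of post-deployment risk monitoring for an ML system.
    # Everything here (metric names, thresholds, disable_model, evaluate_risk)
    # is illustrative, not taken from the talk or the ACM Code.

    from dataclasses import dataclass

    @dataclass
    class RiskReport:
        error_rate: float            # overall prediction error on recent traffic
        worst_group_error: float     # error rate for the worst-affected user group
        complaint_rate: float        # user-reported harm per 1,000 predictions

    def evaluate_risk(report: RiskReport) -> bool:
        """Return True if the deployed model is still within acceptable risk."""
        return (
            report.error_rate < 0.05
            and report.worst_group_error < 0.10   # watch for discriminatory impact
            and report.complaint_rate < 1.0
        )

    def reassess(report: RiskReport, disable_model) -> None:
        """Frequent reassessment: if risk is unacceptable, pull the plug."""
        if not evaluate_risk(report):
            disable_model()  # fall back to a safe, non-ML path and alert a human

    # Example: a report showing disproportionate harm to one group triggers shutdown.
    if __name__ == "__main__":
        reassess(RiskReport(error_rate=0.03, worst_group_error=0.22, complaint_rate=0.4),
                 disable_model=lambda: print("Model disabled; humans notified."))

The worst-group metric is there deliberately: an overall error rate can look fine while one group of users bears most of the harm, which is exactly the kind of discriminatory impact 1.4 and 2.5 are asking you to watch for.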

Once again, if you run into conflicting values, the public good needs to be your central concern - not the puzzle you've solved, not the amazing piece of technology. Yes, it's probably a great puzzle, a great solution to a technical problem, but maybe it doesn't need to be in society. So that is my little rant about machine learning - I have many talks on this! Now, diversity. This should hopefully be fairly self-explanatory as well. I've got a couple of minutes left. This is Microsoft's gender and ethnicity breakdown for 2019/20. It's still not great. It is better than it was in 2018, when I first took a screenshot of this, but it's still not great. We need to fix that. There are things in the code that really emphasise the importance of doing that, how you can potentially go about it, and the potential issues.

FAQs: what if my boss thinks codes of ethics are for losers? You can do what a lot of employees at companies have been doing: band together with other ACM members, push back, and say my professional code of ethics doesn't let me do this. This is one of the benefits of being an ACM member, although I'm not shilling for the ACM! Google employees used the ACM Code of Ethics to push back against Project Dragonfly. There have been other examples where it has been used in legal situations as well. So this is one way you can do that.

What if I'm in the military or security? Security people in particular get upset about "avoid harm", because sometimes it's important to do harm for a greater good. We developed the code with that in mind. There are some very specific case studies that talk through how to use the code if you're in that situation, but the idea is that harm should be avoided and minimised - that is the key thing, really. How is the code different from others? It is up to date, for one thing. It has had massive participation from lots of different computing professionals from all different backgrounds, and so a huge amount of input that we had to go through and take into account as we developed it. We call it the conscience of the profession because we honestly think we've captured that through this participatory process.

What if I break the code? There is a whole PDF that explains the enforcement policy. Ultimately, the ACM can only really police within the ACM, and the worst thing that we can do is kick someone out of the ACM. In some areas that matters more than in others, because some of the big conferences and journals are ACM conferences and journals, but obviously if you don't care about that, it is probably not such a bad punishment. It is as much as we can do without further regulatory apparatus from government, et cetera.

My final slide, really: responsible innovation, which is kind of an umbrella term that encompasses ethics. This is a way of thinking through your innovation life cycle to create technology with and for society. We have a framework called the AREA framework, which starts with anticipating the potential impact of your technology - you can do this through various different methods, and I will give you a list at the end.

Then you reflect on the ethical and social issues your tech may raise - you can use the Code of Ethics for this, and that is one of the key aspects of this framework. Then you need to engage with relevant stakeholders to help identify potential issues and mitigate them. That brings in diverse perspectives, which allows you to identify potential issues more quickly than if you put it out into the wild and everybody gets mad at you.

Then, finally, you act by putting methods in place to ensure that issues are resolved, that your people get good training, et cetera - that is more of an infrastructural issue. This framework can help you frame the innovation life cycle of your technology. I really encourage you to have a look at it.

There are some benefits ... oh, this slide didn't come out very well. There are some benefits to doing this, but I'm sure I've already told you all of those. If you want to look at more of this stuff, there is the ACM Code of Ethics here, and the case studies are available on the website.

I do a podcast on video games and ethics, which is not updated very often these days, and this other one here, ORBIT at orbit-rii.org, has all sorts of interesting information about responsible innovation and does training for researchers if you're a research organisation, that sort of thing. And here's where you can find me. Thank you very much. I'm really excited to answer any questions.