
Anticipating the Future Impact of Today's Technology | Yoav Schlesinger | Talks at Google

How can you predict the unintended consequences of the products you are building today? How can you actively safeguard users, communities, societies, and your company from new future risks? As we think about the future of tech at Google, Yoav Schlesinger (Principal for the Tech and Society Solutions Lab at Omidyar Network) will showcase the Ethical Operating System (Ethical OS) and how it can help anticipate risks or scenarios before they happen. By asking the right sets of questions, Ethical OS provides a practical tool for beginning the tough conversations that ensure the tech we're building lives up to its promise as a force for good.

Talks at Google

4 years ago

[Music] Thank you. Thanks, everybody, thanks for being here today. I'm talking about the Ethical OS, the ethical operating system, or, if you want to call it that, a guide to anticipating the future impact of today's technology. We'll talk a lot more about what that means, but for now let's just say that this talk is about how not to regret the things that you will build, and about the ways, the tools, the foresight, the frameworks we can bring to the process of developing new technologies to ensure that we have in mind, from the outset, the users, the societies, the communities that will be impacted by the products we actually build and ultimately ship.

As I already mentioned, I am Yoav; there's my Twitter handle if you want to tag me. I have been doing this work, thinking pretty deeply about how we build more innovative yet responsible products, for a while at Omidyar Network. Let me tell you a little bit about what Omidyar Network is and what the Tech and Society Solutions Lab is, to give a little context. This is my boss's boss's boss, Pierre Omidyar, who was the founder of eBay, and from the earliest days of eBay Pierre had a vision for tech as a great distributor of opportunity in the world and a great tool for empowering people to lift themselves up. If you think about eBay as a marketplace where everyone could be a buyer and a seller, it was at its core a project in that idea of empowerment. When eBay IPO'd, Pierre took much of the wealth he had accumulated and rolled it into Omidyar Network, as well as our sister institutions, to have social impact through for-profit and nonprofit investments. For the last fourteen or so years, Omidyar Network has been making investments globally in things like financial inclusion, citizen engagement, education, property rights, digital identity, a whole range of things, with this central thesis of empowering people around the world.

In about 2016 we, like everyone else, ran into a brick wall, an awakening that happened I think for a lot of us around Brexit and for many of us around the 2016 elections and what happened through them in terms of manipulation of our democratic process, when we all collectively woke up and said: wait a second, that tech we thought was just going to solve all of the world's problems actually has some pretty deep problems and challenges that we need to address as well. So we at Omidyar Network had been thinking: we've had this techno-optimism for so long, how do we actually ensure that the tech we are developing is good for people? Not just tech applied for good, but tech that at its core is good. And so we stood up the Tech and Society Solutions Lab, which aims to help technologists mitigate, prevent, and correct the downsides of technology, to ensure that it can live up to its potential as a positive force for good in the world. We're doing that by co-creating and supporting the creation of solutions to advance those goals. There has been a lot of talk about the problems, and we'll do a little bit of that in a few minutes, but at the core we're focused on what the solutions to those challenges are, and the Ethical OS is one of the tools we've built to address some of them.

So what brings us here today? There's been a lot of talk about the challenges that tech has created for people, and these are just a few that keep appearing:
challenges around transparency and opacity, about how decisions are made around tech and which decisions are being made beyond our control and beyond our understanding; how algorithms are developed and the biases they might be perpetuating; how privacy and security are built, or not built, into products; how tech's monoculture might be perpetuating many of these things; ranging all the way down to the environmental impacts of the tech we're building and rising social and economic inequality. It's a whole range of issues that bring us to the table in thinking about how we build more responsible products.

Here's an analogy I like to use, with all credit to the folks at Santa Clara University and their tech ethics practice: foresight issues, ethical issues if you will, are like birds. First, they're everywhere. Some are ordinary, some are rare, some are big, some are small, some are local, some are exotic, some are ubiquitous, but there are birds everywhere. And yet it's really easy to go through life surrounded by them and not see them at all; they just sit in the background, and if you're not paying attention you won't even notice them. Getting good at seeing them is a skill built only by practice, and practice makes finding them easier and more rewarding; when you're practiced at seeing the birds, it becomes a daily ritual, part of what you do as you navigate the world, and that much more rewarding. Skilled watchers learn how to see them, and where and when they're most likely to turn up: when you are a bird watcher, you know that if you head to this forest you're likely to see these types of birds. Certain types are found more in some landscapes than others, and some are not found everywhere at all, so you can predict with some degree of certitude where those birds may appear. It's also easier to spot them with help: if you have a bird-watching buddy next to you, you're doubling your capacity to spot the birds on the landscape. And finally, you may need special lenses to see them; your own eyes may limit your ability to spot the bird on the horizon, and a special lens, like the Ethical OS or any other framework, helps you spot them. If you just replace the word "birds" with "ethical issues" or "foresight issues," I think you understand how this analogy plays out.
We built the Ethical OS to aid in doing just this, and we built it alongside the Institute for the Future, a nonprofit based in Palo Alto that thinks deeply about these issues of foresight, about how we think about the consequences of future technologies. Together we decided that if you can't predict the future, you're in the wrong business. I'm just kidding; but at the core, one of the things we are interested in is thinking about the consequences of technology, and about the consequences of human systems that are sped up, amplified, or disrupted by those technologies. We know that no one can predict what tomorrow will bring, so until we get a crystal ball app, the best we can hope to do is build the muscle of anticipating long-term social impacts, which is essentially what the Ethical OS is meant to do. It's not actually necessary that we predict the future; it's just that we get better at practicing the skill of bird-watching, which I will now introduce you to.
The Ethical OS is designed to help makers of tech, product managers, engineers, and others get in front of problems before they happen; it's been designed to facilitate better product development, faster deployment, and more impactful innovation. We have this thing as technologists where we imagine that the products we build will solve all the world's problems and change the world for the better, and to a large extent that's true. But it is also useful at times to think of the glass as half empty: if we're always seeing the glass as half full, or entirely full, we will not spot the risks before they happen. Here are the three questions this is really designed around, though there are many more embedded in it. If the technology you're building right now will someday be used in unexpected ways, what can you do to prepare for it? What new categories of risk should you be paying attention to? And lastly, what choices can you make that would actively safeguard users, communities, society, and your company, in this case Google, but any company, from future risks?

We built three different tools into the Ethical OS, and I'll introduce you to each of them as a way of exercising this foresight muscle. The first is a set of scenarios to consider that just start pumping the juices; think of it as the warm-up for your run. This is about getting the blood flowing, thinking about the potential risks of technologies through some specific scenarios.
Some questions to ask about each of these scenarios: What's your greatest worry when you imagine this future? How might different users be affected differently by it? What actions would you take if you could foresee those impacts down the road? And what could we, together, be doing differently to prepare for that risky future? This is all a little Black Mirror-y, apologies for that; in fact, some of the scenarios we built into the Ethical OS have since played out on episodes of Black Mirror, as well as out in the world. We built a scenario around smart toilets, which we couldn't have imagined would be demoed at CES this past January. So the muscle of predicting some of these things is really useful to exercise. Here are a couple of the scenarios we built.

Are you ready for a future where conversation bots have been trained to imitate specific people? Not just people in general, a generic user or a customer service representative, but your grandmother or a celebrity, using datasets collected from public social media posts. Those bots are deployed across networks, email, and text messages in super-targeted, super-personalized propaganda campaigns, and as a result they are highly effective at changing opinions and driving action, because the messages appear to be from your family and friends. What are you worried about? What can we do to prevent this future? How would people be disproportionately impacted by a future in which this happens?

Or: in response to growing concerns over tech addiction, and anticipating possible government regulation, several popular social media and game companies decide to voluntarily enforce time limits. Sounds like a good idea, right? Adults might be limited to two hours a day, kids to one hour a day, and whether a platform is limited or unlimited becomes a major selling point. You can imagine a world in which an unlimited platform is a much hotter consumer good, while some people prefer having hard limits that prevent them from becoming too addicted. So again, who is affected? There was an article in The New York Times, I think last week, about how, in some measure, personal human interaction is becoming a luxury good as those with fewer opportunities spend more time with screens. You can think about the disparate impacts of this kind of future.
Or: Fortune 500 human resources departments subscribe to a smart employer service that evaluates a person's suitability for your culture, and the stress associated with that culture, using social media posts. We already kind of do this, people trawling Facebook, Instagram, and so on to understand who a potential employee is, but you can imagine it being automated through artificial intelligence, so that algorithms identify individuals suffering from mental illness or depression, or the service even predicts the development of mental illness in the future based on trends in an individual's postings. Again, you can imagine the impacts that would have on the workforce, on the future of work, on hiring, and again the disparate impacts on different groups of people.

Or: a major social network company in the future purchases a top US bank and becomes a first-of-its-kind social credit provider. It could base mortgage approvals, loan approvals, and credit access on deep data collected from its social platform, and could take into consideration the credit histories of your friends and family as well as the locations you've visited; imagine a visit to a bar or a legal marijuana dispensary figuring into your social credit and the credit your bank might provide. It could also do semantic analysis of your messages and photos to decide whether you are generally happy, angry, anxious, or depressed, and figure that into whether you get a loan as well.

Or: 25 percent of online orders are delivered by drone. Sounds great, it comes right to your door, but these drones are fitted with cameras and sensors to collect data as they fly over neighborhoods, which provides the drone company with additional revenue from shippers and merchants. Some individuals get free unlimited drone delivery by consenting to the collection of their data by the drones that fly over; but neighborhoods where drone delivery is legally permitted are subject to the same data-collection activities even though not all of their residents or households have explicitly consented. Again, who is impacted, and what could we do to prevent this future?

One of the tools you can use to think through these impacts of a tech product you're building, before they happen, is called a futures wheel.
If you take one thing from this talk, well, I hope you take a lot of things from this talk, but if you take one thing, the futures wheel is an incredibly powerful tool for thinking these things through. Put at the center your product, your change, the idea you have; let's take the last example, drone delivery of online orders. Spin out from that the consequences that might happen, the first-order consequences, and then from each of those ask: okay, and then what happens, and then what happens, and then what happens, from each of those nodes on the futures wheel.

I'll give you two quick examples that I pulled from the web just by googling "futures wheel." The first: increased choices for renewable energy, which sounds like an incredibly positive development in the world, and we would all likely agree that increased choices for renewable energy is a positive development. Put that at the center of your futures wheel and spin out the first-order consequences: you can imagine new companies emerging in the energy marketplace, oil prices falling, monopolies being dismantled, and technologies being developed based on geography and politics. Those all sound really positive as first-order consequences, but when you spin some of them out, you get to negative consequences at the second order. If oil prices fall, jobs can disappear in the oil industry, and some poor countries can actually become oil-dependent: because oil is now cheaper, poorer countries access more oil rather than renewables and become oil-dependent as a result. And on the loss of monopolies, war and poverty could result for countries that are not competitive in the marketplace. So it takes this tool of figuring out first-order, then second-order, then third-order consequences, sometimes, to see the negative impacts. Again, we imagine renewables are good, and the first order looks good, but when we get to the second and third order, often you can see potential risks out on the horizon.

Here's another one; sorry if this is illegible, I just pulled it from the web. At the center of this futures wheel is open source technology, and popular sentiment is generally that open source technology is a net good, a positive development. I'm not going to go through this entire futures wheel, but I wanted to highlight a few things further out on the degrees of consequence and impact: if you go out far enough, you have untraceable terrorism, you have companies failing, you have loss of jobs, you have a lack of attribution for products, some potential negative impacts from open source technology that might not have been expected at the outset. So again, it's a useful tool for starting to think through future impacts.
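A minimal sketch of how a futures wheel might be captured in code, assuming a simple tree in which each node's children are its next-order consequences; the Node class and print_wheel helper are hypothetical names, and the entries are drawn from the renewable-energy example above.

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    """One change or consequence on the futures wheel."""
    label: str
    children: list["Node"] = field(default_factory=list)

    def consequence(self, label: str) -> "Node":
        """Attach a next-order consequence and return it for further expansion."""
        child = Node(label)
        self.children.append(child)
        return child


def print_wheel(node: Node, order: int = 0) -> None:
    """Walk the wheel, showing each item's order (0 = the change at the center)."""
    print(f"{'  ' * order}[order {order}] {node.label}")
    for child in node.children:
        print_wheel(child, order + 1)


# Center of the wheel: the change itself (renewable-energy example from the talk).
wheel = Node("Increased choices for renewable energy")

oil = wheel.consequence("Oil prices fall")                              # first order
oil.consequence("Jobs disappear in the oil industry")                   # second order
oil.consequence("Poorer countries become oil-dependent")                # second order

monopolies = wheel.consequence("Energy monopolies are dismantled")      # first order
monopolies.consequence("War and poverty for uncompetitive countries")   # second order

print_wheel(wheel)
```

Walking the tree and printing the order of each consequence mirrors the exercise of asking "and then what happens?" at every node.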
The second tool is risk scanning. We created eight risk zones to look at when thinking about your products, and I'm going to go through each of them in some detail, but just to list them here: risk zone one, truth, disinformation, and propaganda; two, addiction and the dopamine economy; three, economic and asset inequalities; four, machine ethics and algorithmic bias; five, the surveillance state; six, data control and monetization; seven, implicit trust and user understanding; and eight, hateful and criminal actors. By dividing things into these eight risk zones, we could identify the spaces where hard-to-anticipate and unwelcome consequences are most likely to emerge, say over the next ten years. I want to stipulate that we did not include every area where things might go wrong; you'll notice environmental impacts are not included. This was meant to be the intersection of hard-to-anticipate and unwelcome, and we already know that environmental impacts are likely to come, so that is not hard to anticipate. These risk zones are really just meant to represent the hard-to-anticipate areas.

Let's talk about risk zone one: truth, disinformation, and propaganda. In this future, shared facts are under attack, and the primary mode of attacking these kinds of shared facts today has been fake video. That's a signal on the landscape that says this is a potential risk zone in the near term: everything from fake news, to bots that spread propaganda, to deepfakes, highly convincing video that is algorithmically altered to replace people's speech, facial expressions, and identities and that creates fake proof of actions or speech that never happened. Individuals today are highly motivated to subvert the truth at massive scale, especially for political ends, and new technologies will make it easier to spread lies and undermine trust. So over the next decade, what else could be faked via new technologies? If this is the risk zone you're scanning for, what questions should you be asking? First, what type of data do users expect you to accurately share, measure, or collect? Second, how could bad actors use your tech to subvert or attack the truth, and what could become the equivalent of fake news or deepfakes on your platform? How could someone use your technology to undermine trust in social institutions? And lastly, if you can imagine the form that misinformation might take on your platform, even if your tech is apolitical, how could it be politicized in some way to destabilize a government, a community, a regime?
That's the checklist of questions to ask about your product in risk zone number one.

Risk zone number two: addiction and the dopamine economy. In this risk zone, we've all come to understand that time spent online maximizes profits over well-being. You can see that in things like research by Common Sense Media, which shows that the average teenager now spends nine hours a day using some form of social media, while at the same time studies show that people achieve their maximal intended use of an app like Instagram or Snapchat after eleven minutes, and that after eleven minutes overall happiness decreases. So how can you design tools that advocate for user happiness, offline and online, over keeping eyes glued to the screen? The questions in risk zone number two that you probably want to be asking: Does your business model maximize user attention and engagement, essentially the more the better, and if so, is that good for people? What does extreme use look like, and how would you distinguish moderate from extreme use? How could you design a system that actually encourages moderate use? And lastly, is there potential for negative or toxic material to increase and drive the levels of engagement that maximize time spent on the platform? Four questions to ask around addiction and the dopamine economy. I'm going to roll through the rest a little more quickly, just so I can introduce all eight risk zones.
Risk zone three: economic and asset inequalities, where new technology can democratize access but can obviously also exacerbate inequality. In this world, you can get cheaper insurance by being white. In 2017, Oxfam International showed that eight people owned as much wealth as the entire bottom half of the world's population, so wealth concentration and distribution is an issue, and new technology can provide income opportunities and balance distribution, but it can also cater only to high-income groups and eliminate low-income jobs. What questions should you be asking? Which groups are disproportionately impacted? How might workers on your platform be impacted by virtue of the type of contract they are given, contractors versus full-time employees? How might communities with fewer resources be unable to access your platform, or have too much access to your platform? You want to think about how the distribution of wealth is impacting both the haves and the have-nots.
In risk zone four, we're looking at machine ethics and algorithmic bias, where human bias is amplified through artificial intelligence, as in the recent case where Amazon scrapped its HR recruiting tool because, if the word "woman" appeared anywhere on your resume, you were, I think it was, 27 times less likely to receive an interview than if that word did not appear as a keyword. The system trained itself into that behavior. The application of AI in critical domains like welfare, education, employment, and criminal justice has intensified, and the idea that technology is neutral, that this is not a product of human action, is no longer really acceptable, because we know that human decisions are used to build the models, human decisions are used to categorize the data, human decisions are embedded in the algorithm, and you can't blame the algorithm any longer. At the core, the questions you want to be asking: Do you use deep data and machine learning, and are there gaps in the data, or historical biases, that might be biasing the technology? For example, the COMPAS criminal-justice profiling system was sentencing Black defendants far more frequently than white defendants to much more severe sentences, because it was using a data set showing that Black defendants were more likely to commit crimes again in the future. Well, it turns out that if you're using historical data about crime and its incidence, and you're using that to predict the future, you're just reinforcing the bias of what has happened historically, as opposed to countering that bias by understanding that the past is not the future. Does that make sense? Black defendants have typically had higher recidivism rates because of historical, deeply entrenched economic and socio-political inequality, but if you use that to predict the future and sentence people accordingly, you're just reinforcing that bias. So: have you seen instances of bias actually entering your product's algorithms? Is it amplifying that bias, and who's responsible for it? Do you have a diverse team that can spot those risks early enough to understand that you may be perpetuating those biases? How will you push back against a blind preference for AI-driven decisions? Do you have transparency into the system, and is there actual recourse for people who feel they've been negatively impacted?
In risk zone five, we've got the surveillance-state set of issues, where surveillance tools and facial recognition today are empowering the powerful at the expense of the powerless. There are recent examples of social bots being co-opted by governments and militaries for use in attacking their opposition: armies of automated, software-driven profiles used to target journalists, activists, and citizens, using Western surveillance tools. In this risk zone you need to be asking about your product: How might a government or military body use this technology to increase its capacity to surveil its citizens? Who, besides the government and the military, might have access to those tools and want to increase the surveillance of citizens as well? Who would they track, and why? Are you creating data that could follow people throughout their lifetimes, and will the data your tech is generating have long-term consequences for the freedom and reputation of those individuals? Whom would you not want to use your data to surveil, and what can you do to proactively protect that data from being accessible to malicious actors you would not want to have it?
In risk zone six, we're talking about data control and monetization, where users have been left without the ability to control, share, monetize, and benefit from their data alongside the tech companies that create and use it. We understand that users now expect access to tools for acquiring, sharing, interpreting, and verifying the information that's been collected about them, and we expect an increasing level of agency around data from users in the future. So if you're creating a product that collects data, and that's just about every product these days, what questions do you want to be asking? What data are you actually collecting? Do you need to collect it, or would you just like to collect it? Are you selling it, and if so, to whom, and how might they go about using it? Do your users have the right to access the data you're collecting on them, and could you build a way into your platform or product to give them the right to capture, share, or monetize their data independently of you? What could bad actors do with this data if they had access to it? What could a government do with the data if it were granted access? These are questions we've seen come up on the landscape recently, particularly around access to people's phones and other types of government requests for people's data.
In risk zone seven, we're talking about implicit trust and user understanding, where misuse of data is a serious problem because users don't trust companies with opaque terms of service, and because users are unable to understand how a popular app or platform is working, how their engagement is optimized, and what's being tracked and collected. It's really hard for users to be clear on what the terms are, and companies can expect backlash from the users of their products, and from employees, when those terms of service are violated, as in the case here with Uber. The questions to ask: Does the technology you're building have a clear code of rights, and are your terms of service easy to read, access, and understand? Is there a version of your product available to users if they don't want to sign the user agreement, and could you imagine building a product that would do that? Does your technology do anything you don't even know about? It's hard to imagine, but this is sometimes the case. If users object to the idea of their actions being monetized, is it possible to create a sustainable model that builds trust with them? And are all users treated equally: could you handle consumer demands or government regulations that require all users to be treated equally, or at least transparently? Could your product or platform do that?
And lastly, risk zone eight: hateful and criminal actors, where online tools enable the global dissemination of terrorism, hate, bullying, radicalization, trolling, doxxing, and much more. We saw this, very unfortunately, most recently with the massacre in New Zealand, where social media platforms were used to disseminate the terrorist attack and were used to radicalize the shooter in the first place: platforms used to perpetuate these kinds of behaviors globally, and there is a responsibility, as part of this foresight work, to be thinking about those possibilities before they happen. So: how could someone use your technology to bully, stalk, or harass people? What kinds of ransomware, theft, financial crimes, or fraud could come about through use of your platform? Do you, as a technology maker, have an ethical responsibility to make it harder for bad actors to act on your platform?
How could organized groups use your technology to spread hate or recruit others? And lastly, what are the risks of your technology being weaponized, and what responsibilities do you have to prevent that?

So those are the eight risk zones, and there's a lot more embedded in each of them; you can download the full toolkit from ethicalos.org. If you go through them, here are three things to do. First, share this with your team: go through it as an exercise, go through the risk zones, and understand the impact of your technology at the team level. Second, consider adding some of those top questions to your product requirements document, or any other document you set out at the beginning of the development cycle, to think about which questions you should be asking all along the way as you develop and before you ship. And lastly, scan the horizon as an ongoing exercise for additional signals about the risk zones. If you're doing that, you're doing the work of the Ethical OS and thinking about the consequences of the tech before you ship.
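One possible way to act on that second suggestion, sketched here as an assumption rather than anything the Ethical OS prescribes: keep the risk-zone questions as data and render them into the product requirements document each cycle, so the same questions get asked before every ship. The zone names follow the eight risk zones above, the sample questions are abbreviated, and RISK_ZONES and prd_risk_section are illustrative names.

```python
# Hypothetical helper that renders Ethical OS-style risk-zone questions
# into a checklist section for a product requirements document.
RISK_ZONES = {
    "Truth, disinformation, and propaganda": [
        "What data do users expect us to accurately share, measure, or collect?",
        "How could bad actors use this product to subvert or attack the truth?",
    ],
    "Addiction and the dopamine economy": [
        "Does the business model maximize attention and engagement? Is that good for people?",
        "What does extreme use look like, and how would we encourage moderate use?",
    ],
    "Economic and asset inequalities": [
        "Which groups are disproportionately impacted or excluded?",
    ],
    "Machine ethics and algorithmic bias": [
        "Are there gaps or historical biases in the data behind our models?",
        "Is there recourse for people who feel negatively impacted?",
    ],
    "Surveillance state": [
        "How could a government or military use this to surveil citizens?",
    ],
    "Data control and monetization": [
        "What data do we collect, do we need it, and who else could gain access to it?",
    ],
    "Implicit trust and user understanding": [
        "Are the terms of service easy to read, access, and understand?",
    ],
    "Hateful and criminal actors": [
        "How could someone use this to bully, stalk, harass, or radicalize?",
    ],
}


def prd_risk_section(zones: dict[str, list[str]]) -> str:
    """Format the risk-zone questions as a checklist section for a PRD."""
    lines = ["Risk-zone review (revisit before every ship):"]
    for zone, questions in zones.items():
        lines.append(f"- {zone}")
        lines.extend(f"  - [ ] {q}" for q in questions)
    return "\n".join(lines)


print(prd_risk_section(RISK_ZONES))
```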
The last tool we built into the Ethical OS is a set of ideas for alternative ways of future-proofing: best practices that could help the tech community mitigate risk at scale, ideas for industry-wide efforts to create products that have both a company's and humanity's best interests in mind. These are just kind of pie-in-the-sky ideas, and there's a lot more in the Ethical OS, but here are a few to consider.

One is a Hippocratic oath for data workers. You could imagine a world where anyone working with data took an oath to protect the data of their users. If you were going to develop such a thing, what commitments would it include? What rituals would be associated with taking the oath in the first place? There's actually a beautiful ceremony for civil engineers in Canada called the Order of the Iron Ring: when you become a civil engineer in Canada, you're presented a ring, purportedly made from the metal of a collapsed bridge, and you're meant to wear the ring on your right hand, your drafting hand, so that all along the way through your work as a civil engineer you keep in mind your ethical responsibilities to the people who will be crossing your bridges or living in your buildings. You can imagine building rituals like this around a Hippocratic oath for data scientists, or data workers more generally.

Second, tech companies have bounties for bugs: if you find a bug for Microsoft, they will pay you for having found it. Could you imagine building ethical bounties as well, for identifying risks in these risk zones or any others? How would somebody claim that bounty, what should they be paid, and who in the company would be responsible for assessing whether the risk identified was actually a risk that needed to be mitigated in the first place?

You could also imagine a list, for employees within a company, of red flags that absolutely require running something up the flagpole, or pulling the metaphorical red handle. What could go on that list? Who would create it and update it? Would it be shared across a company, would it be public, would it be proprietary, would each company have its own? These are questions to think about as we think about building resilience as a system of companies, product makers, and developers.

The last idea we had was creating an actual licensure for technologists. Lawyers and doctors and a whole bunch of other professions have licenses to operate, and if you behave unethically you can be stripped of that license and no longer access the privileges of the profession. Could you develop a similar thing for technologists, and if you did, what kinds of things would cause you to lose a license?
Medical malpractice causes you to lose your license if you're a doctor; what would cause you to lose your license as a technologist?

All of this, to wrap up, is a way of thinking about what's hot and what's not, with thanks and credit to my friend Jake Metcalf for this list. What's hot: show us your work, show us how you got to this decision, with transparency. What's not: "follow our principles" blindly. What's hot: global adaptability and interoperability of platforms, as opposed to adherence to a Western canon as the only way of thinking about ethics; instead of a Western canon of ethics, thinking about societal impact more broadly. What's hot: defining the due diligence, the process of discernment, the process of thinking through those consequences, first-, second-, and third-order. What's not: an inflexible framework that doesn't allow you to spot those risks in the first place. Not hot: a closed community, thinking about these things in silos. What's hot: open source, open dialogue, tools that enable us all to get better at this in the first place. Hot: ethics is a process, it is actually labor; it requires sustained effort to identify, track, mitigate, and improve the consequences for the things that we value as human beings. And it is not PR; it matters because it impacts people and their lives. And lastly, what's hot is doing this work as assignable tasks: you can be the owner of this on your team, it can be parceled out by a product manager or a team manager, with responsibilities for different people to spot these risks and bring them to the table. What's not hot is ethics as a black-hole expense, where there's no accountability built into the framework.
Finally, one last thing: I'm going to make a plug for a new podcast which, if you haven't listened to it, you should. It's hosted by Caterina Fake, who was the co-founder of Flickr and has since become a venture capitalist. She has a new podcast called Should This Exist, in which she asks a lot of these same questions of early-stage entrepreneurs, inventors, and startup founders; she brings them onto her show with a new product idea and asks: Is the product you're building fundamentally good for people? What negative consequences and unintended impacts might you imagine for this product? How can you mitigate them? Have you thought about all of these first-, second-, and third-order consequences?

So that, at its core, is the Ethical OS. It's a way of bringing foresight about unintended consequences, or unanticipated consequences, or consequences you probably could have foreseen if you'd asked the right sets of questions in the first place, to the activity of building something new, to the activity of adding a feature, to the activity of writing new code, to the activity of building things that we imagine will be good for people while we might not be imagining the ways they might be bad as well; to put that glass half empty. So with that, I will take questions.
[Audience] Thank you so much for coming; this has been such an enlightening talk. I noticed that this product seems to generally be based on future events, future occurrences. Could you reapply it to current companies now, and kind of revamp the way their systems are flowing now, do you think?

[Yoav] Yeah, I think absolutely. At the core, these questions are meant to identify negative impacts, and you could absolutely be asking the same set of questions about a current, existing product: how is it doing this now, are we building a product that is currently being used in this way, and how might we mitigate the current risks, not just anticipate future risks? So I think it absolutely has application for current products. And again, most products living in the world today go through multiple iterations; there are very few products that ship and are done. With each iteration, each new feature, each tweak to the code, each tweak to the platform, as we go through the evolution of a product, it's a useful tool along the way as well. I would say this is not meant to be a one-and-done activity; it's not meant to be asked one time so you can say, okay, we asked the set of questions and now we don't have to worry about it. It's about building a process of discernment so that, all along the way, we're asking these questions with frequency and we're able to spot things. Great, if the answer is no, nothing's changed, the risk we anticipated is no longer an issue and continues not to be an issue; but at the point it becomes an issue again, let's surface it and think about ways to change the behavior, change the product, change the way we're doing business.
[Audience] Do you know of any companies that have already started this? Has anyone reported back to you saying, yes, we've had a week-long meeting to go through all of these steps?

[Yoav] We do know companies that are using this, and we have also had some degree of success embedding it in incubators and accelerators, so we're able to get a class of early-stage founders or startups thinking about these issues from the outset. I can't speak specifically about particular companies; a lot of companies don't necessarily want to be public about their use of tools like this, though I think that's becoming less the case as more companies embrace their responsibilities and their ethics. As someone I heard recently said, ethics is now the hottest product in Silicon Valley, so I think a lot of people are starting to be proud of their use of tools like this rather than shying away from it. But yes, this is definitely in process, and if not this, then other frameworks for having these conversations: there's a design agency in Seattle called Artefact which designed the Tarot Cards of Tech, a card deck that asks a lot of the same questions. So really it's about what the tools for asking these questions are, and it doesn't need to be the Ethical OS; these questions could also just be asked by anybody on the team who's empowered to do so. Any team leader, any manager, anybody at any level should be able to ask these questions, whether through the Ethical OS or just because they care.
[Audience] Ethics is not an exact science. How do you decide what is deemed ethical and not ethical? It's largely opinions.

[Yoav] Yeah, I get that question a lot. If it's not helpful to use the term ethics, I say do away with the term ethics, because at the core, and I hope this came through in the presentation, when we're talking about ethics I'm talking less about structured and rigorous frameworks for Western ethics, deontological versus consequentialist versus utilitarian frameworks, dissecting Kant versus Mill; I don't find that exercise particularly helpful. I find the exercise of thinking about ethics as the responsibility for mitigating harm to be the core activity of doing ethical thinking. In that case, I think the issue of ethical relativism becomes less of an issue, because really we're asking these specific questions: who is negatively impacted by the tech? And you don't even need to care; you could say, we know that some people are being negatively impacted by this and we're okay with that. That's fine. The only thing I'm advocating through the use of a tool like this is: go through the process, ask the questions, answer them thoughtfully, discern the potential outputs, outcomes, and impacts, and then say, how do we trade off our values against those potential futures? Those values get to be defined by you, by your team, by your company; follow your values, they don't need to be defined by anyone else. Then make the trade-offs, once you've asked the questions, between the impacts that might occur, or the ways your tech is behaving, or the ways users are using your product, and say, how do we square our values with those impacts?
[Audience] What comes next for Omidyar, beyond the Ethical OS toolkit? Do you have any comments about what you have in the pipe?

[Yoav] I think at the core there's a need, more broadly, for ethical infrastructure, if you will, that supports this activity in the first place. If you plant a healthy tree in a withering orchard, it's unlikely that healthy tree will resurrect all of the dead trees in the orchard, and it's unlikely that healthy tree will flourish if the soil is poisonous. Building ethical infrastructure and ethical culture is a necessary piece of the puzzle that supports the use of a thing like the Ethical OS at scale. One of the things we're trying to puzzle through is how you build the kinds of things that scaffold ethics all along the way. You've got the training of future technologists: how are they exposed to the behaviors and mindsets of asking these questions as they learn the activity of coding, if they're computer scientists or engineers? How are employees onboarded and trained once they enter a company to think through these issues, and what kinds of supports exist for them to raise red flags; are there chief ethics officers or ethical ombudsmen who support that activity? How do venture capitalists and funders, institutional or otherwise, set KPIs and metrics that support companies doing ethics better and create more mission-aligned companies in the first place, rather than just incentivizing growth over all else? What kind of consumer awareness needs to be built in order for consumers to have a more active voice in the kinds of products they want to use? How is employee voice harnessed on the issues that employees care about? We've seen a tremendous amount of employee activism recently, in companies large and small, about the kinds of ethical issues they see. All of that speaks to ethical culture and ethical infrastructure, and we're thinking about that broad ecosystem: not just tools like the Ethical OS, which are a really important piece and absolutely necessary, but also the supporting mechanisms that enable that behavior.
[Audience] Just as a follow-up to that question regarding the infrastructure needed: here in the US there's probably a lot of skepticism with regard to the ability of, say, our federal government to provide that sort of infrastructure. To what extent are you optimistic regarding our ability to actually put those mechanisms in place? And if that doesn't happen, to what extent are you optimistic that the market, or some other actors, can do it in the absence of that?

[Yoav] I am not necessarily optimistic about our government catching up. If you watch the recent tech hearings, unfortunately our members of Congress do not demonstrate an extremely high level of proficiency or fluency around products, the way they are built, and their impacts on people. And yet we are seeing this really interesting trend of some large companies now actually asking for regulation of certain things. There are lots of reasons they might be doing that, but I think there is a growing awareness that regulation will probably be necessary, and I am not bullish on the idea that it will catch up very quickly, knowing, unfortunately, the dysfunction of our current government and the system of passing legislation. At the local level it's probably more likely; at the city and state level we might see more interesting regulation coming out, especially from California and a few other places that are thinking about these things. If at the core your question is about the degree to which companies will self-regulate, as it's called, in order to do this better if the government can't assert itself in terms of legislation and regulation, I think there is a competitive advantage to be built from doing this particularly well: building user trust around security and data, and thinking responsibly about the products that are built. I often come back to the example of Volvo in the automobile industry, which created the safest car on the road, and every person who was going to put their teenager into a car knew to get them a Volvo, because it was destined to crash at some point and you just wanted to protect your kid. There was real competitive advantage to be built around being the safest car on the road. So I am perhaps naively optimistic that we may be moving in a direction, for the tech industry as well, where there is a competitive advantage to be built based on doing this well, and in that case we may see a race to the top. Again, that might be naive, but you asked about my degree of optimism, so I will say that I am naively optimistic about it.

[Host] Thank you very much, Yoav, for coming today to speak, and thank you for your time and for taking all these questions.

[Yoav] Thanks for having me, I appreciate it. [Applause]

Comments

@robert_trumpeteer

Yeah if Google starts to pay their taxes, then I would watch this haha #taxevaders #panamapapers

@ErnestOfGaia

Thought he was gonna talk about EOS.io