INSIGHTS FROM 2017 SEPTEMBER IGNITE

Posted by Adam Stead | 14-Sep-2017 17:21:47

Automation, Data, and Trust - themes which converged at this September’s IGNITE

14th SEPTEMBER - and Nimbus Ninety's office is already (mostly) clear, the kit packed away following another successful 'IGNITE' event. It was a tremendous conference, and we'd like to thank everybody who spoke, everybody who attended, the venue, and our sponsors.

IGNITE events have themes, starting with questions like: is more automation for the greater good? How can we propel innovation? To some extent, we walked away with answers:

Robots won’t kill us, but they might be racist. Data is the most politicised single aspect of disruption. Artificial intelligence is not about cutting labour costs – it is about fundamentally changing businesses. We badly need to find new models for trust. And trust is surely the most important thing a business can build as it implements tech in the commercial space.



TRUST

In some fields, technology presents solutions for trust: distributed ledger technology can remove the need for it in financial transactions. Automation can make a process predictable; smart contracts can mitigate risk, making long-term planning easier. And formerly longwinded trust exercises have been made slick: Apple’s new phone will use your face as a password, removing the last bit of ‘friction’ between users and apps.

But more and more, tech presents trust challenges. Unsupervised learning algorithms may come to conclusions without being able to explain how they got there. This is the heart of the ‘black box’ algorithm conundrum – a decision, even the right decision, without explanation, is unacceptable in some contexts. If it’s unexplainable, it’s unverifiable too. How can we recognise algorithmic competence if we don’t understand what it does?

Further, some algorithms incorporate bias into their judgements. Imagine a shoe, runs one explanatory video. You may have imagined a brogue, a trainer, a high-heeled shoe… but the most common type of shoe to imagine is a brogue, so when you ask your bot to find you a picture of a shoe, it finds you a brogue. Now imagine a CEO. Are you imagining a fifty-year-old white man in a suit, perchance? There’s a ‘bias in, bias out’ principle which applies to algorithms working in predictive analytics. Algorithms trialled in US courts have continued to give longer sentences to black Americans – that’s the data they’re using to make judgements, after all.
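To make the ‘bias in, bias out’ point concrete, here is a minimal sketch – a toy illustration, not code from any system discussed at IGNITE. A “model” that does nothing more than learn historical approval rates per group will faithfully reproduce whatever skew those past decisions contained:

```python
# A toy 'bias in, bias out' sketch. The "model" simply learns the historical
# approval rate per group, so any skew in the training data reappears in its
# predictions. Groups and decisions are invented for illustration.

history = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def train(records):
    """Learn the observed approval rate for each group."""
    counts = {}
    for group, approved in records:
        seen, yes = counts.get(group, (0, 0))
        counts[group] = (seen + 1, yes + (1 if approved else 0))
    return {g: yes / seen for g, (seen, yes) in counts.items()}

def predict(rates, group):
    """Approve whenever the learned rate for that group exceeds 50%."""
    return rates[group] > 0.5

rates = train(history)
print(rates)                      # {'group_a': 0.75, 'group_b': 0.25}
print(predict(rates, "group_a"))  # True  - the historical skew is reproduced
print(predict(rates, "group_b"))  # False
```

Nothing in the code is malicious; the bias arrives entirely through the training data.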

It seems likely that these algorithms are incorporating biases we don’t know are there, too. A great deal of time, thought, and research is spent examining gender and racial bias in these systems (which we still can’t budge); there must be thousands of subtle biases which algorithms will continue to promulgate.

It feels, intuitively, as though algorithms should be fair-minded and unbiased because they lack the kinds of social intelligences and instincts which keep humans from being impartial. Not only is this not the case, but the absence of these intelligences can create further trust problems.

One is that caring is creepy. Reeling off obscure bits of information about a person to them is not a convincing way to simulate a human relationship, and people tend to dislike customer service robots that do so. This has implications for data collection, which will be discussed shortly. But the move away from data-based personalisation, to concentrate more on the particulars of an interaction, might be for the greater good.

Lack of ‘common sense’ also damages trust. Presently, algorithms are still crude. The 2010 “flash crash” is probably the most famous example of a stock algorithm going badly wrong: the writers coded a series of rules into an algorithm to sell if certain conditions were met; one day some unusual circumstances came along, aggressive selling ensued, and the Dow Jones dropped by around 9%. It is fortunate that the algorithm monitoring for stock market violations identified the problem. (The most dysfunctional human organisations involve mismanagement and corruption across several levels; there might soon be interesting problems in the “bots managing bots” space.)
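A toy sketch of the kind of rule described above – sell when certain conditions are met – shows how such logic can feed on itself. The thresholds and structure here are invented purely for illustration; real execution algorithms are far more complex:

```python
# Hypothetical rule-based sell trigger; thresholds are invented for illustration.

def should_sell(price_drop_pct: float, volume_multiple: float) -> bool:
    """Sell when the price has already fallen sharply AND volume has spiked."""
    return price_drop_pct > 3.0 and volume_multiple > 2.0

def run(ticks):
    for price_drop_pct, volume_multiple in ticks:
        if should_sell(price_drop_pct, volume_multiple):
            # In a feedback loop, this sell order deepens the very fall that
            # triggered it - the dynamic behind a flash crash.
            print(f"SELL: drop={price_drop_pct}%, volume x{volume_multiple}")

run([(0.5, 1.1), (3.5, 2.4), (4.8, 3.1)])
```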

The flash crash took place in 2010. Many of these technologies are older than they are presented as being (algorithms are just lists of instructions, a timeless idea, though their modern use can be traced back to around the 1950s). But the startling increase in the availability of data is new, and machine learning means that algorithms will become much better at using that data.


COLLECTING DATA

The battle ranks of the politics of data are still forming, even as the General Data Protection Regulation (GDPR) becomes law in spring.

On the one hand, there are 'big data' evangelists. Imagine the potential, they say, if as much data as possible were available about everyone. Take health – surely the biggest advantages of ‘big data’ will come in this space – imagine if masses and masses of data were available for algorithms to rifle through, finding correlations that humans wouldn’t think to test for. Think of the diseases we could identify the causes of; think of the cures we could find. The plane of collective human knowledge about drugs and biology would be lifted overnight. We can already see bots such as Watson doing superb work in this space.

There are lots of early pro-data projects born of the initial enthusiasm for data as a force for good. Data.gov was a highly successful Obama-era project – a website designed to release as much government information as possible, for the public to use as they please. (Not coincidentally, the NSA mass data collection scandal also happened under Obama.) Data.gov.uk is a similar, UK-based website. Anybody can look at, and analyse, the data that the government collects, for the public good. ‘Information wants to be free.’

For more commercial uses, the enthusiasts continue, what’s so bad about understanding customers? Private information should be pseudonymised – everyone agrees there. But somebody’s shopping history is an excellent basis on which to predict what they might wish to buy, and analytics on somebody’s social media activity is an excellent basis on which to predict who they might wish to vote for. Companies could iterate better, more relevant products and make their marketing more accurately targeted.
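Pseudonymisation itself is straightforward in principle. A minimal sketch – assuming a keyed hash is an acceptable technique, and with an invented key – replaces direct identifiers before the shopping history is analysed:

```python
import hashlib
import hmac

# Hypothetical key; in practice it must be stored separately from the data,
# and keyed hashing is only one part of what GDPR expects of pseudonymisation.
SECRET_KEY = b"keep-this-key-away-from-the-dataset"

def pseudonymise(customer_id: str) -> str:
    """Derive a stable pseudonym from an identifier using an HMAC."""
    return hmac.new(SECRET_KEY, customer_id.encode(), hashlib.sha256).hexdigest()[:16]

purchases = [("alice@example.com", "beard comb"), ("bob@example.com", "brogues")]
pseudonymised = [(pseudonymise(email), item) for email, item in purchases]
print(pseudonymised)  # histories remain linkable per customer, but not named
```

The point of the keyed hash is that the same customer always maps to the same pseudonym, so purchase histories stay analysable without the names attached.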

Data collection is therefore for the best. If a party is obliged to justify every piece of data it collects, it ipso facto cannot learn new things from that data, because it will only collect what it already knows is relevant. The whole beauty of big data is that we might find trends that we would not otherwise know were there: a correlation between gene X and frequency of cancer remission; a correlation between a love of dogs and purchases of a beard comb.
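That ‘find trends we didn’t know to look for’ idea amounts, in its simplest form, to scanning every pair of attributes for correlation. A sketch with made-up data and column names (using statistics.correlation from Python 3.10):

```python
from itertools import combinations
from statistics import correlation  # available from Python 3.10

# Made-up dataset: each column is one attribute across the same eight people.
data = {
    "likes_dogs":        [1, 1, 0, 1, 0, 1, 0, 0],
    "bought_beard_comb": [1, 1, 0, 1, 0, 0, 0, 0],
    "age":               [34, 51, 28, 45, 22, 39, 30, 61],
}

# Check every pair of attributes and flag the strongly correlated ones.
for a, b in combinations(data, 2):
    r = correlation(data[a], data[b])
    if abs(r) > 0.7:  # arbitrary threshold for "interesting"
        print(f"{a} ~ {b}: r = {r:.2f}")
```

The more attributes collected up front, the more pairs there are to test – which is the evangelists’ whole argument for collecting broadly.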

The concern from this group is that GDPR could be expensive to implement and could seriously curb the advantages of big data for both businesses and consumers.

On the opposite side, the pro-privacy lobby justifiably want as little data collected from them as possible. It’s all very well to imagine a world where data is a publicly shared resource; what actually goes on today is that data which is rightfully ours has been hoarded and used for commercial gain by megacompanies, which are already extraordinarily powerful and seemingly aim for world domination. GDPR is designed to put data back in the hands of its rightful owners, its subjects. Most users do not wish for every company to know all their most intimate secrets and to use them to personalise marketing campaigns. It is creepy, and could be used maliciously.

The crux of the argument is about consent. Everybody already agrees that companies should acquire consent before collecting data – and they will be legally obliged to under GDPR. But what should data consent look like?

An 'opt-in' approach may precipitously reduce the levels of data available. A look at the significant difference in organ donation figures between countries with opt-in and opt-out systems can give you a sense of this. (The sense that ‘signing up’ is for the public good is simply not as strong for donating data as it is for organs.) But the pre-GDPR status quo just involved asking for an obligatory tick at the end of a long T&C screen. Was that really good enough?

Even the data we choose to collect is a political question. Forms reflexively ask for things like age, race, gender, and sexuality – a series of questions designed to slot us into a demographic bloc. Some of these (sexuality, religion, race) are sensitive enough to have specific provisions made under GDPR; but there’s no reason they should be the categories by which we divide people conceptually.

Our beard comb vendor will surely be using gender as a proxy for the most pertinent question of all: does this person have a beard? Getting their message to the right people could be the difference between life and death for this company. How do we get this information to the bots, without them sinisterly looking through our cookie history for other beard product purchases? How do we get the beard-comb users of the world to trust vendors enough to share that information with them?

Thanks to all those who attended, and we look forward to seeing you at the next one. As ever, we left with more (but better) questions.

If you are interested in working on these questions – be it trust, security, innovation, customer experience, or something else – Nimbus produces bespoke events of all kinds. Whether attending a dinner with experts in a specific field, or a workshop on a particular subject, Nimbus is able to help.

We also keep a blog and produce cutting-edge research from across several fields. If there is a topic you are interested in hearing about, or you believe you could contribute, please get in touch at editorial@nimbusninety.com.

Topics: IGNITE Summits

Written by Adam Stead

As Research & Content Producer, Adam finds and publishes up-to-date expertise regarding how disruptive technology will drive change in business and life.
