Futureheads

Dark UX: Ethics, Emotions & Cambridge Analytica

Nicole Pieri

https://twitter.com/futureheadsjobs

Dark UX has fascinated me ever since I first came across the concept, and I am becoming increasingly aware of companies employing dark UX practices to influence and sometimes confuse or mislead me as a user (have you ever tried to cancel an Amazon Prime account?).

So I was eager to go along to this month's UXPA to find out more. Hosted by the lovely folks at Foolproof, the talks focused on defining dark UX patterns, exploring why companies use them, and looking at how one can design to persuade and influence. Speakers also explored the ethics of such design, and where the 'line' should be drawn.

Daniel Harvey, Head of Product Design at The Dots, kicked off proceedings with a fascinating talk on Facebook, GDPR and the Cambridge Analytica scandal.  

He revealed some pretty scary statistics on the sheer volume of information many of us have given to Facebook over the years. This information comprises an incredible 29,000 data points on each user and 1,500 data points on non-users(!). It includes gender, religion, facial recognition data, IP addresses, messages, check-ins and so much more, even extending to the content of messages on users' phones. Now, technically, users agreed to share their data in this way (or, in some cases, simply never opted out of these practices), but Facebook manipulated users' understanding of how their data was being used by burying relevant information in complex updates and in confusing 'opt-out-to-not-not-let-us-have-this-info' style practices.

As you may have seen, Facebook has recently shared its new 'GDPR-friendly' designs, which Daniel walked us through. Friction is placed in the way of removing sensitive personal information. For example, to accept the update without amending your settings there's a big, bright blue button that catches your eye, but the button that takes you to see more detail and change your settings is a faint grey and white one that is easily missed - something Daniel described as a 'candy-coated button': a primary call to action that invites use while occluding the secondary action.
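To make that pattern concrete, here is a minimal, hypothetical sketch of the asymmetric emphasis in a React/TypeScript component. The component, props and colour values are illustrative only and are not taken from Facebook's actual interface; the point is simply how much visual weight separates the two choices.

```tsx
// Hypothetical sketch of the 'candy-coated button' pattern: the primary
// action is visually dominant, while the route to review settings recedes.
import React from "react";

const acceptStyle: React.CSSProperties = {
  background: "#1877f2", // bright, high-contrast blue that draws the eye
  color: "#ffffff",
  padding: "12px 32px",
  border: "none",
  borderRadius: 6,
  fontSize: 16,
  fontWeight: 700,
  cursor: "pointer",
};

const manageStyle: React.CSSProperties = {
  background: "transparent", // faint grey-on-white, easy to overlook
  color: "#8d949e",
  padding: "12px 16px",
  border: "1px solid #e4e6eb",
  borderRadius: 6,
  fontSize: 13,
  fontWeight: 400,
  cursor: "pointer",
};

export function ConsentPrompt(props: {
  onAccept: () => void;
  onManage: () => void;
}) {
  return (
    <div>
      {/* The eye-catching primary call to action */}
      <button style={acceptStyle} onClick={props.onAccept}>
        Accept and continue
      </button>
      {/* The secondary action that actually lets users change settings */}
      <button style={manageStyle} onClick={props.onManage}>
        Manage data settings
      </button>
    </div>
  );
}
```

Even though both buttons are equally functional, the styling alone steers most users towards 'Accept and continue' - which is exactly the friction Daniel was describing.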

He also talked about the facial recognition section of the design. Although at first glance it appears they've made this section much clearer and opt-in only, they've used some dark UX patterns to confuse the user: the copy initially talks about 'opting in', but then asks users to 'accept and continue'.

Facebook has made it easier for users to understand what data they are sharing, and how it's being used, but they aren't designing for trust. Daniel argues that in order to effectively design for trust companies need to: 

  • Explain what data they’re collecting, why and how they profit from it 
  • Allow customers to opt-in, rather than opt-out 
  • Be prepared for and prevent attacks on data 
  • Make data literacy a priority  

Facebook's new privacy policy designs do improve the user experience to some extent, explaining what data is collected and why, but they stop short of offering clear opt-in choices. In my opinion, this in itself suggests that Facebook isn't yet putting the user first and designing for trust. But they did make some significant changes in a relatively short time frame, so I'm very interested to see how they approach the management of their user data over the coming weeks and months, and whether they can design with trust in mind.

Karen Cham, Professor of Digital Transformation Design at the University of Brighton, dived into a great talk on designing for PET (persuasion, emotion and trust). She explained that consumers make decisions based on emotions and there are ways to design carefully around this.

One key part of getting this right is applying dual process theory. Users give two kinds of response: implicit and explicit. The former is the reaction you have before you've had a chance to think about it; the latter is what users report. To design accurately for PET, you need to measure the reaction before the reported response. Karen described how people with damage to the emotional parts of the brain, despite no change to their IQ, can become unable to make decisions - and it's known that consumers make 95% of their decisions at an unconscious level.

So, how do you measure what users feel before they are able to report on it? One answer is to wire people up to biometric tools, which can give us data and therefore insights into non-conscious, instinctive reactions. The gap between thinking/feeling and saying is often where users alter their responses, knowingly or not, and a lot of traditional user research methods, such as interviews, are affected by this. It might be due to a miscommunication in the way users express themselves, or it could be caused by a more conscious embarrassment at their initial reaction. This is where dual process testing comes in handy - it can reveal the emotions that drive user behaviour and, in turn, enable the design of products that engage users on both an emotional and a rational plane - all for the greater good.

Sophie Freiermuth talked in more detail about the idea of designing for good and the ethics of design. She showed some existing codes of ethics and discussed their limitations, since every designer and situation is different, but stressed that it's vital designers continue to question, examine and enquire in order to do the best for their users.

All in all, these were super interesting talks which prompted lots of self-reflection and awareness around the level of information I have shared online - but it's reassuring to see the amazing work being done by UX practitioners who are truly user-centred.

If you'd like to discuss the role of dark UX, or you're interested in finding out more about the UX marketplace, please say hello at nicole@wearefutureheads.co.uk.