We hoped technology and AI would deliver a panacea, free from human faults and failings. But from misogynistic chatbots to biased hiring algorithms, we are now training our tech to copy the worst elements of human nature, and the results are already proving dangerous. According to experts, 85% of algorithms are flawed by biased data, and some have already wreaked havoc in society: one risk-assessment tool skewed court predictions in the US against black defendants by 2:1 compared with their white counterparts.

Should we, as many scientists have suggested, train our technology on a "perfect" synthetic data set to help rid society of injustice? Or is the idea of "perfect" data an illusion? After all, which of us could describe the perfect human? Should we accept the impossibility of unbiased data, stop trying to outsource moral decisions to our tech, and agree with Thomas Kuhn that "the answers you get depend on the questions you ask"?
