We must put trust at the forefront of everything we build in AI and machine learning. Every organization looking to deploy AI against great data sets needs to dedicate resources to asking the right questions on behalf of humans, from the first line of code all the way through to the final interface.
Tim Wu said of AI that "the point is to free us up from the struggles of survival so we can pursue higher goals." There is a lot of trust, and a lot of responsibility, in this statement.
If we set the right principles from the outset, AI will democratize high-value services like healthcare, education, and agriculture for those who cannot afford them, ultimately lifting the standard of living globally.
But if we rush into building the technology to solve these problems without taking human trust into consideration, we will miss the point in the long term and be left to deal with major biases and their consequences.
What good is it to build smartphone interfaces that can diagnose our first signs of depression and help alleviate the struggle if we don't trust that our data and privacy will be protected? Or, even worse, if we suspect this information can be used to lower our desirability as an employee or a partner?
There is a lot of talk around jobs being taken by robots and AI.
I propose that we create jobs to manage this transition. Fei-Fei Li said of AI that "no technology is more reflective of its designers." We need to create the role of Trust Officers, who will own the work of building trust with consumers. They will have the resources and autonomy to ask questions, build in fairness assurance, and test for maximum human benefit.
With machine learning taking over quality assurance, we will need humans to be in charge of fairness assurance. We need to start establishing definitions of fairness, along with metrics and processes for detecting bias. We will need maximum diversity of background, thought, education, and skillset to arrive at these answers.