Bias-free algorithms are easier than bias-free people
Your concern about creating bias-free algorithms when they have been built by human hands is certainly legitimate. I agree completely that building people free of bias is the real problem; in fact, I suspect it is impossible. That is the basis of my point. I don't think you can permanently remove bias, let alone full bigotry, from people, and hence human societies are always going to struggle with this. It would be very much like removing emotions from people's actions: even if we could, we wouldn't want to. We need anger and passion in order to have love.
With machine augmentation, i.e. machine learning, there is strong reason to think we can define models that come very close to bias-free. This is really a two- or three-stage process using carefully constructed models as filters. Applying these filters to human conclusions and decisions could then flag biased assumptions or conclusions. This is exactly the problem being dealt with in the judiciary's use of ML predictions of recidivism in sentencing recommendations. By using ML models and increasingly sophisticated systems to monitor and track the process of forming assumptions, we can, I'm certain, remove almost all bias from the process.
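To make the filtering idea concrete, here is a minimal sketch of one such stage: a statistical check that flags a batch of human decisions when outcome rates differ sharply between groups. The function name, data shape, and threshold are all illustrative assumptions, not a description of any deployed system.

```python
def flag_possible_bias(decisions, threshold=0.1):
    """Flag groups whose positive-outcome rate deviates from the overall rate.

    decisions: list of (group, outcome) pairs, with outcome 0 or 1.
    Returns a dict mapping each flagged group to its outcome rate.
    """
    # Bucket outcomes by group.
    by_group = {}
    for group, outcome in decisions:
        by_group.setdefault(group, []).append(outcome)

    # Baseline: the rate of positive outcomes across all decisions.
    overall = sum(o for _, o in decisions) / len(decisions)

    # Flag any group whose rate differs from the baseline by more
    # than the threshold -- a prompt for human review, not a verdict.
    flagged = {}
    for group, outcomes in by_group.items():
        rate = sum(outcomes) / len(outcomes)
        if abs(rate - overall) > threshold:
            flagged[group] = rate
    return flagged
```

A check like this would not prove bias; it would only point reviewers at decisions worth a second look, which is the filtering role described above.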
This is complex, but much of the current problem stems from a history of allowing capitalist systems to provide these services without access to the underlying code and algorithms, or to the databases used to construct them. I do not have a problem with using market competition to improve the services and their efficiency, but the algorithms used to augment important human decisions must be, at least theoretically, open and understandable. That is not the case in the services now being used, and hence bias that was built in is only discovered after the fact.
We may not be able to understand the relative weighting of vast amounts of data, but the systems themselves should be able to point us at specific possible problems to check. Without getting too far into abstract design here, I think it can be done in ways that allow confidence that the system is not letting bias into its recommendations. If that can be proven, I can see a day when important decisions made by people without ML augmentation would be unacceptable and cause for immediate appeal.
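As a sketch of how a system could "point us at specific possible problems": for a simple linear model, it can surface the features contributing most to a particular recommendation so a human can audit them, for example a proxy variable like a zip code. The feature names and weights below are invented for illustration.

```python
def top_contributions(weights, inputs, k=3):
    """Return the k feature names with the largest absolute
    contribution (weight * value) to a linear score, largest first."""
    contribs = {f: weights[f] * inputs.get(f, 0.0) for f in weights}
    return sorted(contribs, key=lambda f: abs(contribs[f]), reverse=True)[:k]

# Hypothetical recidivism-style model: a reviewer seeing "zip_code"
# near the top of the list has a concrete place to look for proxy bias.
weights = {"prior_arrests": 2.0, "age": -0.5, "zip_code": 1.5}
inputs = {"prior_arrests": 1.0, "age": 2.0, "zip_code": 1.0}
```

Real models are far more opaque than this, but the principle is the same: the system does not need to be fully understood to be able to name the inputs driving a given output.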
The next step is post-singularity, at whatever point that comes, when augmented humans are able to handle the data and filtering process and self-correct. We need to prepare for that as an option.