Pentagon and DARPA Seek Predictive A.I. to Uncover Enemy Thoughts

By Nicholas West

I’ve recently been covering the widening use of predictive algorithms in modern-day police work, which has frequently been compared to the “pre-crime” of dystopian fiction. What is discussed far less often, however, are the many examples of just how faulty this data still is.

All forms of biometrics, for example, use artificial intelligence to match identities against centralized databases. Yet in the UK, a police trial of facial recognition at a festival late last year produced 35 false matches and only one accurate identification. Although that degree of inaccuracy is the worst case I’ve come across, many experts are concerned about the expansion of biometrics and artificial intelligence in police work, as various studies have concluded that these systems may not be reliable enough to serve within any system of justice.

The type of data
