Tag: image classification

Are machines smarter than humans?

Of course not. However, given what I heard and saw at this week’s AI conferences/summit (below), AI experts seemed overwhelmingly to believe that they would achieve that goal (machines as smart as humans) within 30-40 years.

I honestly do not know where these experts got their confidence. To clarify, there are other elements of AI, but when they talked about AI, it was mostly about machine learning; and by machine learning, mostly about “deep” neural networks (and an NN is really just polynomial regression, as we discussed previously). As I explained before, current AI practice has a serious problem of overfitting/overmodelling. The AI can be fooled by a single-pixel change or a silly mask, something that would almost certainly never fool an adult (or even a child) of normal IQ (again, examples below).
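To make the single-pixel point concrete, here is a minimal sketch (the weights and labels are entirely made up for illustration, not taken from any real model) of why one small change to one input can flip a classifier’s decision:

```python
import numpy as np

# A toy linear classifier over a 4-"pixel" image. The weights are
# hypothetical, invented purely for illustration.
w = np.array([2.0, -3.0, 1.5, 0.5])  # hypothetical learned weights
b = -0.5                             # hypothetical bias

def predict(x):
    """Label the image 'cat' if the linear score is positive, else 'dog'."""
    return "cat" if x @ w + b > 0 else "dog"

x = np.array([0.9, 0.4, 0.2, 0.1])
print(predict(x))      # 'cat' (score = +0.45)

# Adversarial nudge: change ONE pixel in the direction that lowers the
# score (pixel 1 has the most negative weight). This sign-of-the-gradient
# idea is the essence of FGSM-style attacks.
x_adv = x.copy()
x_adv[1] += 0.2
print(predict(x_adv))  # 'dog' (score = -0.15): one small change flips the label
```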

Thanks to the conferences/summit, I heard, and thus can report, more interesting examples. These issues were raised in ethics-related sessions, so naturally they were of interest to those with concerns about AI ethics.

The first is Google Photos. You may have heard that a few years ago Google Photos incorrectly labeled a black woman as a gorilla. At the time, Google vowed it was “taking immediate action to prevent this type of result from appearing”.

BBC: Google apologises for Photos app’s racist blunder

Around 2.5 years later, Google did “fix” the problem. You may be forgiven for thinking that Google had by then developed a better image classifier that could reliably label images. But no: all they did was ban “gorilla” (and “chimp,” “chimpanzee,” and “monkey” for that matter) altogether (see the reports by The Verge and the Guardian). Yes, Google “fixed” this problem by never labelling photos as gorillas! Luckily they could still identify pandas in your Google Photos library, so not all was lost. Clearly, it is too difficult for Google Photos to reliably differentiate between a gorilla and a human, a distinction that, again, is obvious to an adult or even a child of normal IQ.
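For what it is worth, such a “fix” amounts to little more than the following sketch (hypothetical code, obviously not Google’s actual implementation): leave the model untouched and simply filter the sensitive labels out of its output.

```python
# Hypothetical label blocklist: the classifier itself is unchanged,
# its sensitive outputs are simply suppressed.
BANNED_LABELS = {"gorilla", "chimp", "chimpanzee", "monkey"}

def safe_labels(predictions):
    """Drop banned labels from a classifier's (label, score) output."""
    return [(label, score) for label, score in predictions
            if label.lower() not in BANNED_LABELS]

print(safe_labels([("gorilla", 0.97), ("panda", 0.91)]))
# -> [('panda', 0.91)]: the misclassification is still there, you just never see it
```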

In the example above, at least no one was physically harmed. The next example is much more serious. Researchers have found that minor changes to traffic signs can lead to misclassifications with serious consequences.

IEEE Spectrum: Camouflage graffiti and art stickers cause a neural network to misclassify stop signs as speed limit 45 signs or yield signs (paper).

As seen above, after just throwing a few stickers on the sign, the deep learning classifier completely messed up, identifying a STOP sign as a Speed Limit 45 sign. Obviously this is quite dangerous.

You don’t need to have passed a driving test to tell a STOP sign from a speed limit sign, just as almost certainly no human (of normal IQ) would be confused by a single-pixel change, a silly mask, or a black man’s or woman’s face. Current AI practice is not smart at all; on the contrary, it is quite stupid.

The fundamental issue with current AI is that it is not based on rational reasoning but essentially on approximation. As I explained previously, the “AI” never tries to understand what is really going on (unlike what scientists would do). If it saw something like y = sin(x), all it would do is use some complex functions/features, such as higher-degree polynomials, to approximate the observed data. While this may work fine for “usual” data, typically in laboratory-like restricted settings, when something unexpected happens, e.g. a rare case or hacking, the system fails.
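To make the sin(x) point concrete, here is a small sketch using numpy’s polyfit (the polynomial degree and sample range are arbitrary choices of mine): the fit looks perfect inside the training range and falls apart as soon as you step outside it.

```python
import numpy as np

# Fit a degree-9 polynomial to samples of y = sin(x) on [0, 2*pi].
x_train = np.linspace(0, 2 * np.pi, 50)
coeffs = np.polyfit(x_train, np.sin(x_train), deg=9)
poly = np.poly1d(coeffs)

# Inside the training range the approximation looks excellent...
print(np.sin(np.pi / 2), poly(np.pi / 2))   # 1.0 vs ~1.0

# ...but step outside it ("something unexpected") and it diverges wildly,
# because the polynomial never "understood" that the data was periodic.
print(np.sin(4 * np.pi), poly(4 * np.pi))   # ~0.0 vs a huge, wrong number
```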

I am not saying AI or deep learning is not a good tool. It is especially useful in image (and similarly video) classification/recognition. What I emphasise here is that I am OK with it as long as the application has no serious consequences, or, when it does, the output can be reviewed by a human before any action is taken (which is why I laughed when I heard a panel discussion on whether AI could replace medical scientists). Using a face image to unlock a personal phone may be fine, but not to access classified government information. Even Apple knows this: the iPhone actually requires you to enter your PIN from time to time to make sure you are who you say you are. Similarly, labelling images while banning sensitive labels may be fine, but fully automated driving is not; yes, we are still very far from that point.
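To be concrete about the kind of human review I have in mind, here is a sketch of such a gate (all names and thresholds are hypothetical, for illustration only):

```python
# Let a model act on its own only for low-stakes, high-confidence calls;
# everything else goes to a human before any action is taken.
def handle_prediction(label, confidence, high_stakes, threshold=0.99):
    """Decide whether a model prediction may trigger an action directly."""
    if high_stakes or confidence < threshold:
        return f"queue for human review: {label} ({confidence:.3f})"
    return f"auto-apply label: {label}"

print(handle_prediction("panda", 0.995, high_stakes=False))     # auto-apply
print(handle_prediction("gorilla", 0.970, high_stakes=False))   # review: low confidence
print(handle_prediction("stop sign", 0.999, high_stakes=True))  # review: stakes too high
```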