Machine Learning Is Damaging Our Privacy

To build any model based on machine learning or artificial intelligence, we need to collect a lot of data, and to get an accurate model we need accurate data. For this reason, companies are forced to collect large amounts of data from users, send it to their infrastructure for processing, and hand it over to machine learning and AI experts to build predictive models. The core problem is that to satisfy the model we need a lot of data, and that data is being stored somewhere. We might say it is processed automatically and no human has access to it, but when researchers want to verify something, they may be forced to read that personal information; authorities may be forced to inspect suspicious content; and this collection puts our privacy at risk. Because these systems rely on AI and machine learning, the data is normally never deleted. For these reasons, I call machine learning and AI one of the biggest enemies of privacy: they force researchers to collect a lot of data, yet there is no sufficient guidance on protecting that data.

Many people argue that we do this to protect users. In spam filtering, for example, we collect a large set of emails (but promise not to read them), let users mark which messages are spam and which are not, and claim we only care about the text, the word counts, and the structure of each message. Anyone with real security expertise knows that any email spam filter can be bypassed with tricks, which I will not explain here because people might abuse them. Anti-spam systems can block known spam and messages crafted by semi-professional attackers, but they are helpless against experts. We are collecting a lot of data, spending a great deal of money on servers to store and process it, and funding universities and researchers to play with complex mathematical formulas, just to end up with a system that is helpless in front of experts.
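To make the word-count approach concrete, here is a minimal sketch of the kind of model the text describes: a bag-of-words spam classifier (naive Bayes) that looks only at word counts, never at meaning. The training emails and labels below are hypothetical toy examples, not real data, and this is an illustration of the general technique, not any particular company's system.

```python
import math
from collections import Counter

def train_naive_bayes(emails, labels):
    """Build per-label word counts from user-labeled emails.
    emails: list of strings; labels: parallel list of "spam"/"ham"."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter(labels)
    for text, label in zip(emails, labels):
        counts[label].update(text.lower().split())
    vocab = set(counts["spam"]) | set(counts["ham"])
    return counts, totals, vocab

def classify(text, counts, totals, vocab):
    """Pick the label with the higher Laplace-smoothed log-probability."""
    words = text.lower().split()
    scores = {}
    for label in ("spam", "ham"):
        n = sum(counts[label].values())
        # prior: fraction of training emails with this label
        score = math.log(totals[label] / sum(totals.values()))
        for w in words:
            # add-one smoothing so unseen words do not zero out the score
            score += math.log((counts[label][w] + 1) / (n + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Hypothetical toy training set
emails = ["win free money now", "free prize claim now",
          "meeting agenda attached", "lunch at noon tomorrow"]
labels = ["spam", "spam", "ham", "ham"]
model = train_naive_bayes(emails, labels)
print(classify("claim your free money", *model))  # prints "spam" on this toy data
```

Note what the sketch also demonstrates: the model is nothing but stored fragments of users' messages, and an attacker who knows the word statistics can simply avoid the incriminating words.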
Some people argue that we have other methods and other forms of protection, and that not everything is based on AI and machine learning, which is true. But what we would like to ask is: why are we spending so much on this? We could deal with the problem of spam through criminal intelligence analysis, policy data centers, and monitoring and response teams. These methods are much cheaper and more efficient. Of course we would need to spend some money to develop them, but once they reach the right level, we could use them to combat cybercrime. When we talk with those who call themselves security experts at universities, they always say, "sorry, we only care about machine learning" (because they only care about publications, not national or international security in cyberspace). When we talk with experts in criminology, they say it is an interesting topic, but they only care about law and legal issues. So we are collecting so much data, and spending so much money, on unreliable systems.

There is no need to collect so much information, and even if there is a need to collect it, there is no need to keep it forever. These privacy problems arose because everyone forced themselves into machine learning and AI. If they thought about something else, or let others investigate these areas, we could protect our users' privacy and enhance their security. As already mentioned, policy management is the recommended solution: there is no need to collect so much data, and even if we do, we can delete it later or let users control their own data instead of hoarding it. For these reasons, I am asking cybersecurity experts to move away from machine learning and AI (I am not saying everyone should leave it, but we need people thinking in a different direction). Universities should open their doors to young people who love cybersecurity but prefer methods without heavy mathematics and AI. Professors do not understand these methods; they push everyone toward AI, and this puts our privacy at risk. We need to open new doors to develop expertise in policy management, rather than relying on unreliable mathematical formulas and forcing people to use AI.


