Artificial intelligence has entered our daily lives, and the ethical discussions around AI have grown as a result, especially concerning the amount of data these AI services collect from users. After all, where there is mass storage of potentially sensitive information, there are concerns about cybersecurity and privacy.
Microsoft’s Bing search engine, recently equipped with OpenAI’s ChatGPT and currently being rolled out, has brought its own set of concerns, as Microsoft doesn’t have the best track record of respecting the privacy of its customers.
Microsoft has occasionally been challenged over how it manages and accesses user data, though far less often than contemporaries like Apple, Google, and Facebook, despite handling a great deal of user information, including when it sells targeted ads.
It has been targeted by certain government regulators and organizations, such as when France demanded that Microsoft stop tracking users through Windows 10, and the company responded with a series of extensive measures.
Jennifer King, director of consumer privacy at the Center for Internet and Society at Stanford Law School, speculated that this is due in part to Microsoft’s longstanding position in its market and the relationships with governments it has built up over its legacy. With more experience dealing with regulators, it may have avoided the level of scrutiny faced by its competitors.
An influx of data
Microsoft, like other companies, must now handle a massive influx of user chat data driven by the popularity of chatbots like ChatGPT. According to the Telegraph, Microsoft employs reviewers who analyze user submissions to mitigate harm and respond to potentially dangerous input, browsing users’ conversation logs with the chatbot and intervening to moderate “inappropriate behavior”.
The company claims that it removes submissions of personal information, that users’ chat texts are only accessible to certain reviewers, and that these efforts protect users even as their conversations with the chatbot are reviewed.
A Microsoft spokesperson explained that the company uses both automated review efforts (since there’s a lot of data to comb through) and manual reviewers. The spokesperson added that this is standard practice for search engines and is covered in Microsoft’s privacy statement.
The spokesperson went to great lengths to reassure those affected that Microsoft employs industry-standard user privacy measures, such as “pseudonymization, encryption at rest, secure and approved data access controls, and data retention procedures.”
In addition, reviewers can view user data only on the basis of “a verified business need only, and no third parties.” Microsoft has since updated its privacy statement to summarize and clarify the above: user information is collected, and human employees at Microsoft may be able to see it.
In the spotlight
Microsoft isn’t the only company under scrutiny about how it collects and processes user data when it comes to AI chatbots. OpenAI, the company that created ChatGPT, has also disclosed that it reviews user conversations.
Recently, the company behind Snapchat announced that it is introducing a ChatGPT-equipped chatbot that will resemble the app’s already familiar messenger chat format. It has warned users not to submit any personally sensitive information, possibly for similar reasons.
These concerns are compounded by the use of ChatGPT and ChatGPT-equipped bots by people who work at companies with their own sensitive and confidential information, many of which have warned employees not to submit confidential company information to these chatbots. Some companies, such as JP Morgan and Amazon, have restricted or banned their use at work.
Personal user data has been and will continue to be a major problem in technology in general. Misuse of data, or even malicious use of data, can have serious consequences for both individuals and organizations. With every introduction of a new technology, these risks increase, but so does the potential reward.
Tech companies would do well to pay extra attention to keeping our personal data as secure as possible – otherwise they risk losing their customers’ trust and killing their fledgling AI ambitions.