Google Advises Employees Not To Enter Confidential Information Into Chatbots – Including Bard


The use of artificial intelligence to assist with everyday work has been increasing recently, whether for writing reports, preparing PowerPoint presentations, speeding up programming, and so on.


One risk of over-reliance on these artificial intelligence systems is that confidential company details and data end up being entered into them. That data can then be used to train the systems, potentially exposing the confidential information to the public.



Google recently advised its employees to be careful when using AI-powered chatbot services and not to enter confidential company details into them, including when using Google's own chatbot, Bard.


Meanwhile, programmers at Google were also told not to use code generated by these services, because it has previously been shown that even when such code works, it often contains vulnerabilities that need to be fixed later.


This is not the first time companies have been seen advising their employees on the use of artificial intelligence. Companies such as Samsung and Amazon have previously been affected by incidents of employees entering confidential details into ChatGPT, and have since introduced new rules for using such software.
