Home » Chatgpt, new things to cook for legal managers

Chatgpt, new things to cook for legal managers

by admin
Chatgpt, new things to cook for legal managers

There are six types of risk that corporate attorneys should keep an eye out for in their dealings with ChatGpt. To affirm it, proposing an analysis of the various areas of action, is Gartnerwhich thus aims to provide new tools to ensure responsible use of tools Generative AI by companies.

Possible legal, reputational and financial consequences

“The output generated by ChatGpt and from other Llm (Large language model) tools is subject to several risks – he declares Ron Friedmann, analista senior director della Legal & Compliance Practice di Gartner -. Legal and compliance leaders should assess whether these issues pose a material risk to their business and what controls are needed. Otherwise, companies could be exposed to legal, reputational and financial consequences.”

Optimize processes to save resources: everything you can do with RPA

I you are risks of ChatGpt that legal and compliance leaders should consider include:

Risk 1 – False and inaccurate answers

The most common problem of ChatGpt and other Llm tools is the tendency to provide incorrect information, even if superficially plausible.

ChatGpt he is also prone to “hallucinations,” including made-up and wrong answers and non-existent legal or scientific citations,” he said Friedmann. “Legal and compliance leaders should issue guidelines requiring employees to verify the accuracy, appropriateness and actual usefulness of any results generated by ChatGpt before accepting it”.

Risk 2 – Privacy and data confidentiality

Legal and compliance managers need to be aware that all information entered in ChatGpt, if the chat history is not disabled, can become part of the training dataset.

See also  Jiatong Group held a party committee (expanded) meeting to convey the spirit of the ninth party congress of the city

“Sensitive, proprietary or confidential information used in prompts can be incorporated into responses for users outside the company,” he said Friedmann. “Legal and Compliance must establish a compliance framework for the use of ChatGpt and clearly prohibit the entry of sensitive organizational or personal data into public LLM tools”.

Risk 3 – Model and output distortion

Despite the efforts of OpenAI to minimize i prejudices and discriminations in ChatGptthere have been known cases of these problems before and they are likely to persist despite your best efforts.

“Eliminating bias completely is likely impossible, but legal and compliance authorities need to stay apprised of the laws governing AI bias and ensure their guidelines are compliant,” he said. Friedmann. “This may involve working with subject matter experts to ensure output is reliable, and with review and technology functions to establish data quality controls.”

Risk 4 – Risks related to intellectual property (IP) and copyright

ChatGpt, in particular, is based on a large amount of internet data that probably includes copyrighted material. Therefore, its outputs have the potential to violate the protections of the copyright or intellectual property.

ChatGpt it does not offer references to sources or explanations of how its output is generated,” he said Friedmann. “Legal and compliance officers should keep an eye out for any copyright law changes that apply to ChatGpt output and require users to check any output generated to ensure it does not infringe copyright or other intellectual property rights.”

Risk 5 – Computer fraud risks

The bad guys are already abusing ChatGpt per generate false information (for example, fake reviews). Also, applications that use Llm models, including ChatGptare also susceptible to prompt injection, a hacking technique in which malicious adversary prompts are used to trick the model into performing unexpected tasks, such as writing malware code or developing phishing sites that look like known sites.

See also  Focus Interview: Smart "Convergence" Zhongguancun Discusses Major Trends_Guangming.com

“Legal and compliance leaders should coordinate with cyber risk owners to consider whether and when to issue reminders to corporate cybersecurity personnel on this issue,” he said. Friedmann. “They should also conduct a due diligence source audit to verify the quality of their information.”

Risk 6 – Consumer protection risks

Companies that do not disclose to consumers the use of ChatGpt (for example, in the form of customer service chatbots) are at risk of losing customer trust and being accused of unfair practices under various laws. For example, California chatbot law requires that in certain interactions with consumers, organizations must communicate clearly and visibly that the consumer is communicating with a bot.

“Legal and compliance leaders need to ensure that the use of ChatGpt by their organizations complies with all relevant laws and regulations and that appropriate information has been provided to customers,” he said Friedmann.

@ALL RIGHTS RESERVED

You may also like

Leave a Comment

This site uses Akismet to reduce spam. Learn how your comment data is processed.

This website uses cookies to improve your experience. We'll assume you're ok with this, but you can opt-out if you wish. Accept Read More

Privacy & Cookies Policy