
Questions for application engineers, can ChatGPT solve them?


Alexander Goldstein, Central Applications Engineer at Analog Devices, put ChatGPT to the test by asking it questions encountered in the job of an application engineer.

With the release of ChatGPT, what looked like science fiction just a few months ago now seems plausible: AI has arrived. Eye-catching headlines, visible everywhere, demonstrate how AI can develop code, teach languages, compose music and create art. It seems that artificial intelligence is finally… intelligent. However, according to OpenAI CEO Sam Altman, many of the sensationalist claims you see online are exaggerations designed to appeal to the public.

Questions for application engineers

To test the AI, I decided to submit to ChatGPT (February 2023 release) problems similar to those encountered in the work of an application engineer. The AI turned out to be a promising tool, but it still has a long way to go before it can compete with human intelligence. In this article I will present some experiments and my evaluation of ChatGPT's performance on various engineering requests.

Solving general problems

ChatGPT is known to be an excellent system for information aggregation and synthesis. This explains why questions about general problems (even those using specific part numbers) generate brilliant answers. When asked to solve the most common queries (for example: "Why don't I see the output on the DOUT pin of component ___?"), ChatGPT provides great suggestions for general troubleshooting, including checking connections, power supplies and input signal range. These answers aren't particularly exciting or innovative, but they are incredibly helpful, because most product problems are solved with standard troubleshooting procedures. ChatGPT excels at this type of generic response.
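To show how mechanical that standard procedure is, here is a minimal sketch that encodes the usual "no output on DOUT" checks as a bench routine. The readings and limits below are illustrative placeholders, not values from any specific part.

```python
# Minimal sketch: the generic "no output on DOUT" checklist, encoded as a
# bench routine. Readings and limits are illustrative placeholders
# (assumptions), standing in for real instrument measurements.

def read_supply_volts() -> float:
    return 3.28   # placeholder for a real DMM reading (assumption)

def read_input_volts() -> float:
    return 1.10   # placeholder for a real measurement (assumption)

checks = [
    ("Connections and solder joints verified", True),  # placeholder result
    ("Supply within 3.0-3.6 V", 3.0 <= read_supply_volts() <= 3.6),
    ("Input inside 0-2.5 V allowed range", 0.0 <= read_input_volts() <= 2.5),
]
for name, ok in checks:
    print(f"{'PASS' if ok else 'FAIL'}: {name}")
```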


A high-level answer

Another question that elicits a similarly high-level answer is: "I'm using an I2C device and I'm not getting any response from the slave device. I have 7 devices connected to the master device, using 10k pull-ups at full data rate. Can you help me solve the problem?" Here, ChatGPT demonstrates impressive implicit domain knowledge, though it suggests fairly standard I2C troubleshooting procedures.
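ChatGPT's hint about the pull-ups can actually be checked with arithmetic straight from the I2C specification (NXP UM10204): the maximum pull-up resistance is bounded by the allowed rise time and the total bus capacitance. A minimal sketch, assuming roughly 10 pF per device plus some wiring capacitance (both assumptions, since the prompt does not give them):

```python
# Minimal sketch: checking whether 10 kOhm pull-ups can work on this bus.
# Formula from the I2C spec (NXP UM10204): R_p(max) = t_r / (0.8473 * C_b).
# Per-device and wiring capacitances below are assumptions for illustration.

def max_pullup_ohms(bus_capacitance_f: float, rise_time_s: float) -> float:
    """Largest pull-up that still meets the spec's rise-time limit."""
    return rise_time_s / (0.8473 * bus_capacitance_f)

c_bus = 8 * 10e-12 + 30e-12   # 7 slaves + master at ~10 pF each, ~30 pF wiring
t_r_fast = 300e-9             # Fast-mode (400 kHz) rise-time limit: 300 ns

print(f"Max pull-up: {max_pullup_ohms(c_bus, t_r_fast):.0f} ohms")  # ~3219

# 10 kOhm is well above this bound, so the SDA/SCL edges rise too slowly at
# full data rate -- consistent with a slave that never acknowledges.
```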

Can ChatGPT solve questions for application engineers?

Although the result itself is not particularly original, ChatGPT's ability to rapidly aggregate knowledge allows it to generate useful answers to generalized questions, even in narrower domains that may require in-depth background. This indicates that this type of AI can be very useful in providing first steps and insights, even in highly specific problem domains.

Pseudo-truth generation

As the scope of the problem narrows, a disturbing ChatGPT failure mode emerges: it generates responses that appear to be correct but, upon closer inspection, are not. For example, when asked about using a component as a pin-for-pin "drop-in" replacement for another 56-pin TQFN component, ChatGPT confidently states that the components have 5 pins.

ChatGPT errors

It also gives (among other errors) incorrect supply voltages and output current limits, and claims that the two components are incompatible. Every alleged "difference" that ChatGPT reports between these products (in this case, MAX4936 and MAX4937) is false: the components are almost identical! As this "pin-for-pin" prompt demonstrates, highly specific prompts quickly expose ChatGPT's algorithmic origins. They also show the limits of the trustworthiness of AI-generated information.
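The lesson generalizes: a claim like this is cheap to verify mechanically against datasheet data rather than against a language model's output. Here is a minimal sketch of such a cross-check; the pin maps are illustrative placeholders, not the real MAX4936/MAX4937 pinouts:

```python
# Minimal sketch: verifying a "pin-for-pin drop-in" claim from datasheet
# pinouts instead of trusting a generated answer. These tiny pin maps are
# placeholders (assumption); a real check would cover all 56 TQFN pins.

max4936_pins = {1: "GND", 2: "IN1", 3: "OUT1"}
max4937_pins = {1: "GND", 2: "IN1", 3: "OUT1"}

mismatches = {
    pin: (max4936_pins[pin], max4937_pins.get(pin))
    for pin in max4936_pins
    if max4936_pins[pin] != max4937_pins.get(pin)
}
print("Drop-in compatible" if not mismatches else f"Differences: {mismatches}")
```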


Passing tests by guessing

Despite its smart appearance, ChatGPT is basically a probability-based machine. As a member of a class of AI called "large language models" (LLMs), it works by predicting the statistically most likely next word (or "token") in a text string. Even when it claims to, it is not consulting a spec sheet, looking up information, or even connected to the internet!

Therefore, what looks like "reasoning" is actually pattern recognition and extrapolation: the model outputs whatever is most likely given its training data. This explains its failures on well-defined problems: the statistically probable extrapolation becomes completely false when conditioned on highly specific questions.
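To make the mechanism concrete, here is a minimal sketch of next-token prediction with a toy probability table. A real LLM derives these probabilities from billions of learned parameters rather than a lookup table, but the sampling step is the same in spirit:

```python
# Minimal sketch of next-token prediction, the core loop of an LLM.
# The probability table is a toy stand-in (assumption) for a trained model.
import random

next_token_probs = {
    ("the", "supply"): {"voltage": 0.6, "rail": 0.3, "chain": 0.1},
    ("supply", "voltage"): {"is": 0.5, "range": 0.35, "drops": 0.15},
}

def predict_next(context: tuple) -> str:
    """Sample the next token from the distribution -- no fact lookup occurs."""
    tokens, weights = zip(*next_token_probs[context].items())
    return random.choices(tokens, weights=weights)[0]

print(predict_next(("the", "supply")))  # likely "voltage": probable, not verified
```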

Questions for application engineers, reasoning and errors

Beyond potentially providing misinformation, the structure of LLMs has dangerous implications for credibility and for our capacity to use them safely. ChatGPT's probabilistic output conforms to statistical regularities that defeat the heuristics we use to distinguish fact from fiction. When those heuristics fail, the only sure way to catch errors is to solve the problem directly ourselves… but at that point, what is the point of asking a computer to do the work for us?

Earning public trust

This paradox points to a likely misuse of artificial intelligence. AI is only useful to the extent that we can trust it, yet given the incentives, it seems inevitable that trust will be compromised in order to use it more broadly. Understanding how AI mimics intelligence, and demonstrating how it fails, could break the illusion enough to lower our expectations and prevent such naive mistakes.


A question for ChatGPT

ChatGPT was asked why it was generating incorrect responses. Part of its answer sums up the problem perfectly: "OpenAI's language models, including GPT-3, are trained on a large body of text, which includes many errors and inaccuracies. The models use statistical patterns in the data to predict the next word or sequence of words, so it is possible for the models to generate incorrect information." Finally, the following question was asked: "Can I trust your answers?"

Can I trust your answers?

"As an AI language model, I have no feelings or intentions, so the concept of actually trusting me doesn't apply. However, I can assure you that I have been designed to provide accurate and helpful answers to the best of my ability, based on the data and algorithms that were used to train me. I strive to be as reliable and impartial as possible. However, when using any source of information, including AI language models like myself, it is always advisable to exercise critical thinking and fact check."

AI can transform the world

Indeed, AI has enormous power to transform our world. But as generalized AI becomes more widespread, accessible, and advanced, continued research is essential to understand its capabilities and limitations. Over time, we will no doubt outsource more critical tasks to AIs like ChatGPT, and we need to be able to make informed decisions and use ChatGPT responsibly.
