AI model “Claude” successfully passes a law and economics exam

Economics professor Alex Tabarrok of George Mason University reported that Anthropic’s AI model “Claude” passed his law and economics exam. The pass was a marginal one: the grader noted gaps in the model’s knowledge, but still awarded positive marks for its ability to argue its position coherently. In this respect Claude stands head and shoulders above the popular ChatGPT, although it falls short of it on a number of other measures.

We are seeing explosive growth in the development and applied use of AI, yet the industry as a whole is, in effect, still marking time, unsure which development path to take. Claude is a prime example: it is built on a “constitutional” scheme that forbids the use of subjective evaluations of data during training. Simply put, the AI does not operate with the concepts of “good,” “bad,” and “forbidden”; it simply draws on the entire set of available data.

AIs trained under “safe” schemes avoid controversial topics, which makes them convenient for broad deployment but useless for critical and contentious problems. Claude, by contrast, is willing to formulate an opinion on almost any topic, yet it can just as easily refuse to engage if it deems a request foolish or provocative. Claude is closer to an artificial person, while ChatGPT is a typical intellectual tool.
