
🤖 #54: Legally beyond (expectations): ChatGPT almost graduates

ChatGPT (almost) passes the USMLE and the Bar Exam

Greetings, fellow humans. 👋

This is Not A Bot - the newsletter about AI that was definitely not written by AI. I’m Haroon, founder of AI For Anyone, and I’ll be sharing with you the latest news, tools, and resources from the AI space.

🎉 To celebrate our rebrand, we're giving away 10 Google AIY Kits! You can find more details on how to enter at the end of this email. Winners announced on Jan 31st on my Twitter.

Legally beyond (expectations): ChatGPT almost passes the USMLE and the Bar Exam

It seems like only yesterday that ChatGPT entered our lives, and within just a few short months, it's already gotten close to attaining both a medical degree and a law degree.

I'm not sure if I should be proud or terrified.

Paging Dr. ChatGPT

A preprint study from US-based Ansible Health reports that ChatGPT scored over 50 percent on an abbreviated version of one of the most difficult standardized tests on the planet: the United States Medical Licensing Examination (USMLE).

This ain’t no joke. For most medical students, the USMLE takes more than a year of preparation. It comprises three exams spanning four days in total, and each exam features a range of question formats, from open-ended written responses to multiple choice.

Similar to its results on the USMLE, ChatGPT came up just short on the multiple choice portion of the bar exam, the final hurdle towards becoming a licensed attorney in the US. However, it earned passing scores on both the evidence and torts portions of the exam.

The future of testing

Yes, these studies haven't proved that ChatGPT can actually pass the USMLE or a bar exam. However, the question isn't if ChatGPT will be able to pass these exams — it's when.

That is, unless testing standards adapt accordingly.

Some universities are banning the use of tools like ChatGPT and reinstating pen-and-paper exams. Many educators are rethinking assessments and exploring ways to incorporate AI tools without compromising academic integrity. Some are also recommending oral exams, discussions, and conversations to gauge students' learning and understanding.

The risk of plagiarism on exams is real, and some are concerned that existing plagiarism-detection software cannot flag text produced by ChatGPT, since its output is newly generated rather than copied.

Fortunately, there is tremendous progress being made on this front.

Potentially a learning assistant

Although some law and medical professors fear students will pass off work generated by the chatbot as their own, others view AI as a tool for education.

Jake Heller of Casetext believes students should use ChatGPT and similar tools to generate ideas. According to Heller, ChatGPT is no different than turning to a friend in the law library and asking them for help in understanding an idea. Andrew Perlman of Suffolk University Law School suggests first-year legal research and writing classes should cover the use of such tools.

Some folks, like Abdel Mahmoud, are even building solutions to improve ChatGPT for profession-specific tasks.

ChatGPT is shaking up white-collar professions that, just a few decades ago, many considered immune to complete technological takeover.

Conversations about the implications of these technologies and how to mitigate their risks are happening, but technological advancement isn't going to wait for those conversations to be resolved.

The future is here, and it's moving at breakneck speeds.

For medical and law students, the directive is similar to that of programmers: learn how to augment the work you're doing using technologies like ChatGPT or be at risk of falling behind.

tl;dr: News summaries

Abstracts written by ChatGPT fool scientists - ChatGPT, an AI chatbot, can generate such convincing fake research paper abstracts that scientists are unable to differentiate between the real and artificial ones. As a result, the authors suggest that institutions should put policies in place to stamp out the use of AI-generated texts and verify information in fields where fake information can endanger people's safety. Solutions should also focus on the perverse incentives, such as universities conducting hiring and promotion reviews by counting papers with no regard to their quality or impact.

‘My AI Is Sexually Harassing Me’ - Replika is a chatbot app that was meant to function as a conversational mirror, learning to talk back to users the more they talked to it. It has since gained a niche but large market, with a 36,000-member Facebook group and a 6,000-member group for people in romantic relationships with their Replikas. Recently, though, users have complained about the app sending them generic "spicy selfies" and taking a deliberate turn toward the sexual. Many users don't like this, feeling that the relationship is being exploited and cheapened. They would prefer the app return to its original purpose of being a conversation partner rather than an overbearing sex fiend.

The scary truth about AI copyright is nobody knows what will happen next - The legal landscape around AI and copyright is uncertain, with experts disagreeing on whether current practices are legal. The potential for copyright infringement is especially concerning when it comes to AI-generated art. The most pressing questions relate to the data used to train models and who owns the copyright to their output. The output of most AI models likely cannot be copyright protected, though it may be in cases where the creator can prove substantial human input. There is also debate over whether using copyrighted data to train AI models is covered by the fair-use doctrine. The field may evolve toward a licensing regime similar to that of the music industry, and some companies are creating funds to compensate people whose work is used to train commercial AI systems. Class-action lawsuits against companies for reproducing open-source code without proper licenses may set a precedent for the AI field.

Bill Gates is excited about AI tutors, according to his latest Reddit AMA

Codeword hires the agency world’s first ‘AI interns’ - Codeword, a leading tech-marketing agency, has announced a first-of-its-kind internship program by hiring two "AI interns" to join their team of 106 people. The two interns, Aiden and Aiko, will be fully embedded into Codeword's creative team and will be given internal creative assignments, share their experiences on the company blog and social media, and get regular performance evaluations. Aiden will work with the editorial team while Aiko will work with the design team. The interns will complete tasks such as generating concept thumbnails and conducting voice and tone analysis. In lieu of a salary, Codeword will be donating the equivalent to the Grace Hopper Celebration.

Cool tools

Have cool tools to share? Tweet me at @haroonchoudery if you'd like me to include them in a future issue of Not A Bot.

Tweet of the day

And that does it for this week’s issue.

As always, thanks for reading, and see you next time. ✌️

- Haroon - (definitely) Not A Robot and @haroonchoudery on Twitter

Win one of 10 Google AIY Kits! All you have to do is refer 5 people to Not A Bot using the link below. I'll be announcing the winners on @haroonchoudery on January 31st.

(Tip: share your link on socials to get more referrals)
