In three previous blog posts, we have discussed recent inventorship issues surrounding Artificial Intelligence (“AI”) and its implications for life sciences innovations – focusing specifically on scientist Stephen Thaler’s attempt to obtain a patent for an invention created by his AI system called DABUS (“Device for the Autonomous Bootstrapping of Unified Sentience”). Most recently, we considered Thaler’s appeal of the September 3, 2021 decision out of the Eastern District of Virginia, which ruled that under the Patent Act, an AI machine cannot qualify as an “inventor.” Continuing this series, we now consider the USPTO’s recently filed opposition to Thaler’s appeal.
Our previous blog posts, Artificial Intelligence as the Inventor of Life Sciences Patents? and Update on Artificial Intelligence: Court Rules that AI Cannot Qualify As “Inventor,” discuss recent inventorship issues surrounding AI and its implications for life sciences innovations. Continuing our series, we now look at the appeal recently filed by Stephen Thaler (“Thaler”) in his quest to obtain a patent for an invention created by AI in the absence of a traditional human inventor.
Striking a blow to patent applicants seeking to assert inventorship by artificial intelligence (“AI”) systems, the U.S. District Court for the Eastern District of Virginia ruled on September 3, 2021 that an AI machine cannot qualify as an “inventor” under the Patent Act. The fight is now expected to move to the Federal Circuit on appeal.
The question of whether an artificial intelligence (“AI”) system can be named as an inventor in a patent application has obvious implications for the life sciences community, where AI’s presence is now well established and growing. For example, AI is currently used to predict biological targets of prospective drug molecules, identify candidates for drug design, decode the genetic material of viruses in the context of vaccine development, and determine the three-dimensional structures of proteins, including their folded forms, among many other potential therapeutic applications.
As we mentioned in the early days of the pandemic, COVID-19 has been accompanied by a rise in cyberattacks worldwide. At the same time, the global response to the pandemic has accelerated interest in the collection, analysis, and sharing of data – specifically, patient data – to address urgent issues, such as population management in hospitals, diagnosis and detection of medical conditions, and vaccine development, all through the use of artificial intelligence (AI) and machine learning (ML). Typically, AI/ML churns through huge amounts of real-world data to deliver useful results. The collection and use of that data, however, gives rise to legal and practical challenges. Numerous and increasingly strict regulations protect the personal information needed to feed AI solutions. The response has been to anonymize patient health data in time-consuming and expensive processes (HIPAA alone requires the removal of 18 types of identifying information). But anonymization is not foolproof and, after stripping data of personally identifiable information, the remaining data may be of limited utility. This is where synthetic data comes in.