California Judiciary cancelled its purchase of ChatGPT Plus
The $4,080 purchase order was submitted on January 2 and was summarized by the California Judicial Branch as "ChatGPT Plus 11 users per year", but the order was cancelled on January 12.
![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F41f78790-bcba-4a16-8d25-8ab86f4954c7_1024x1024.png)
2024-01-20: Updated to add comment from the California Judiciary.
The Microsoft-backed artificial intelligence company OpenAI began selling priority access to its popular Large Language Model, ChatGPT, through a $20 per month subscription known as “ChatGPT Plus” in February of last year. The California Judicial Branch submitted a $4,080 purchase order on January 2 to supply 11 of its employees with access to ChatGPT Plus, according to a previously unreported contract disclosure, but the contract was subsequently cancelled on January 12, according to a statement from the Judicial Council of California.
The Judicial Branch’s official point of contact for the contract, Deborah Mok, did not respond to a request for comment, but one week later, Blaine Corren, a Public Affairs Analyst with the Judicial Council of California, stated that “the intent of the procurement was to do a proof of concept to see if ChatGPT could reduce council staff time devoted to quality assurance related to a website redesign and migration effort—including help with processing PDFs and speeding up content review and editing code scripts.” Mr. Corren further noted that California “had previously used a trial version and the output was positive, so the decision was made to procure licenses for a proof of concept. The procurement has been pending since July 2023. We have been unable to get any comparable quotes so the procurement was cancelled in January 2024.”
Mr. Corren also noted that Chief Justice Patricia Guerrero made several statements regarding the California Judicial Branch’s plans for artificial intelligence at a Judicial Council meeting on January 19, including that “the Conference of Chief Justices (CCJ) and Conference of State Court Administrators (COSCA) recently initiated a Rapid Response Team comprised of chief justices and state court administrators” which is “dedicated to examining immediate concerns within the realm of AI.” According to Justice Guerrero, the Rapid Response Team “will involve gathering and analyzing court orders, rules, best practices, and other actions taken by the state court community in response to incidents where AI tools were used by attorneys and self-represented litigants to construct legal pleadings”, and the team “aims to develop model rules for state courts, addressing issues such as disclosure, transparency, accuracy, authenticity, and certification of AI use in court proceedings.”
According to Mr. Corren, Justice Guerrero further announced that “Administrative Presiding Justice Mary Greenwood, of the Sixth Appellate District, and Alameda County Judge Arturo Castro have graciously agreed to spearhead research efforts for our branch on the opportunities and challenges associated with AI”, with one of the goals being to “help ensure the appropriate use of AI while safeguarding the integrity of our judicial process.”
OpenAI did not respond to a request for comment on the California Judicial Branch’s purchase of ChatGPT Plus, on which government agencies OpenAI has so far contracted with, or for how much money. In June of last year, Microsoft announced that it would begin selling access to ChatGPT to various U.S. Government agencies through its Azure for Government cloud computing program. Microsoft has so far invested at least $13 billion into OpenAI and has played an increasingly central role in the company’s governance.
According to reporting in The Intercept earlier this month, OpenAI recently rewrote its “usage policies” in a manner which removed an explicit ban on the usage of ChatGPT for “military and warfare”, but retained a ban on the usage of the model to “develop or use weapons”.
The U.S. Government’s primary use case for Large Language Models such as ChatGPT has arguably been intelligence analysis. During a webinar with the intelligence-focused social network The Cipher Brief last September, the Director of the Central Intelligence Agency’s Open Source Enterprise, Randy Nixon, explained how the Agency’s use of tools such as ChatGPT for automatic summarization had revolutionized its ability to analyze large volumes of surveillance data. Nixon noted that, since the CIA began incorporating Large Language Model summarizations, “The only thing that holds [us] back on collection is really … having the amount of money to go out and buy everything that’s out there.”
Large Language Models have also played a major role in the relationship between U.S. and Australian intelligence agencies, as exposed last month through a leaked welcome packet for a conference jointly hosted by former Google CEO Eric Schmidt’s Special Competitive Studies Project and the Australian Strategic Policy Institute, which is closely aligned with — and heavily funded by — the Australian Department of Defense.
Beyond giants such as Microsoft / OpenAI, the data-fusion company Palantir, and the data-labelling company Scale AI, smaller startups such as Ask Sage have also jumped into the government market for Large Language Models. Founded by the U.S. Air Force’s former Chief Software Officer, Nicolas Chaillan, Ask Sage promises government customers the ability to securely process Controlled Unclassified Information (CUI) using a variety of Large Language Models.
The desire to begin automatically summarizing the U.S. Government’s numerous feeds of high-resolution drone and satellite imagery similarly motivated the Pentagon’s controversial “Project Maven” effort. In the case of Large Language Models such as ChatGPT, however, the firehose of information being processed frequently consists of posts from social media platforms such as Telegram and Twitter rather than high-resolution imagery. As explained by Mr. Nixon in September, the Agency’s first hint of regional instability is generally “somebody tweeting, or [posting] on Telegram.”
The California court system, and undoubtedly many of its peers, is now experimenting both with how to apply Large Language Models to the firehose of legal filings and with how to respond to the use of such tools by attorneys.