We researched OpenAI’s patents and curated them in an Airtable as a convenient reference. To our surprise, OpenAI has only seven active granted public patents and one pending application.
When exploring patent documents, you will encounter various publication codes that indicate where an application stands in the patent process. Here is a quick summary:
A1: Pending application, published 18 months after the priority date.
B1: Granted patent, not previously published as A1.
B2: Granted patent, previously published as A1.
It’s interesting to note how quickly OpenAI was granted these patents. OpenAI averaged 11 months from the application date to the grant date, which is impressive compared to the industry average of 24 months.
Note: We will keep this list updated as an easy reference for new OpenAI patents, ordered from newest to oldest.
Patent Number: US 11983488 B1
Application Date: 2023-03-14
Published Date: 2024-05-14
Inventors: Puri; Raul, Yuan; Qiming, Paino; Alexander, Tezak; Nikolas, Ryder; Nicholas
Link to Patent: US11983488B1
Abstract:
Disclosed herein are methods, systems, and computer-readable media for automatically generating and editing text. In an embodiment, a method may include receiving an input text prompt and receiving one or more user instructions. The method may also include accessing a language model based on the input text prompt and the one or more user instructions. The method may also include outputting, using the accessed language model, language model output text. The method may also include editing the input text prompt based on the language model and the one or more user instructions by replacing at least a portion of the input text prompt with the language model output text.
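The claimed edit flow, roughly, is: take a prompt and an instruction, run a language model, and splice the model's output back into the prompt in place of some span. A minimal sketch, with the `language_model` stub standing in for a real model call (an assumption for illustration, not OpenAI's implementation):

```python
def language_model(prompt: str, instruction: str) -> str:
    """Stub: a real system would query a trained language model here."""
    # Toy behavior: uppercase the text when the instruction says so.
    if instruction == "uppercase":
        return prompt.upper()
    return prompt

def edit_text(input_prompt: str, instruction: str, start: int, end: int) -> str:
    """Replace input_prompt[start:end] with the model's output for that span."""
    span = input_prompt[start:end]
    model_output = language_model(span, instruction)
    return input_prompt[:start] + model_output + input_prompt[end:]
```

The key structural point from the abstract is the last step: the model output does not merely append to the prompt, it replaces a portion of it.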
Patent Number: US 11983806 B1
Application Date: 2023-08-30
Published Date: 2024-05-14
Inventors: Ramesh; Aditya, Nichol; Alexander, Dhariwal; Prafulla
Link to Patent: US11983806B1
Abstract:
Disclosed herein are methods, systems, and computer-readable media for regenerating a region of an image with a machine learning model based on a text input. Disclosed embodiments involve accessing a digital input image. Disclosed embodiments involve generating a masked image by removing a masked region from the input image. Disclosed embodiments involve accessing a text input corresponding to an image enhancement prompt. Disclosed embodiments include providing at least one of the input image, the masked region, or the text input to a machine learning model configured to generate an enhanced image. Disclosed embodiments involve generating, with the machine learning model, the enhanced image based on at least one of the input image, the masked region, or the text input.
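The abstract describes three inputs (image, mask, text prompt) flowing into one model. A toy sketch of that data flow, using nested lists as stand-in image tensors and an invented `inpaint_model` stub (an assumption, not the patented model):

```python
def apply_mask(image, mask):
    """Zero out pixels wherever the mask is 1, producing the masked image."""
    return [[0 if m else px for px, m in zip(row, mrow)]
            for row, mrow in zip(image, mask)]

def inpaint_model(masked_image, mask, prompt):
    """Stub enhancer: fill masked pixels with a constant derived from the prompt."""
    fill = len(prompt) % 256
    return [[fill if m else px for px, m in zip(row, mrow)]
            for row, mrow in zip(masked_image, mask)]

def regenerate_region(image, mask, prompt):
    """Mask a region, then regenerate it conditioned on the text prompt."""
    masked = apply_mask(image, mask)
    return inpaint_model(masked, mask, prompt)
```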
Patent Number: US 11922144 B1
Application Date: 2023-03-20
Published Date: 2024-03-05
Inventors: Mishchenko; Andrey, Medina; David, McMillan; Paul, Eleti; Athyuttam
Link to Patent: US11922144B1
Abstract:
Disclosed herein are methods, systems, and computer-readable media for integrating a particular external application programming interface (API) with a natural language model user interface. In one embodiment, a method includes receiving a first input at the natural language model user interface, determining the first input includes a request to integrate the particular external application programming interface (API) with the natural language model user interface, identifying the particular external API based on the received input, integrating the particular external API with the natural language model user interface, accessing the particular external API based on the first input or a second input at the natural language model user interface, and transmitting, based on the accessing, a response message to the natural language model user interface, the response message including a result of the accessing.
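This is the patent most readers will recognize as the plugin flow: detect an integration request, register the external API, and route later inputs to it. A hypothetical sketch of that two-step loop; the registry keys and the fake API call are invented for illustration:

```python
class NLInterface:
    """Toy natural-language interface that can register and dispatch to APIs."""

    def __init__(self):
        self.apis = {}

    def handle(self, user_input: str):
        # Step 1: detect an integration request and register the external API.
        if user_input.startswith("integrate "):
            name = user_input.split(" ", 1)[1]
            self.apis[name] = lambda q: f"{name} result for {q!r}"
            return f"integrated {name}"
        # Step 2: dispatch later inputs to a registered API by keyword match.
        for name, call in self.apis.items():
            if name in user_input:
                return call(user_input)
        return "no matching API"
```

A real implementation would of course use a language model rather than string prefixes to decide when an input "includes a request to integrate" an API.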
Patent Number: US 11922550 B1
Application Date: 2023-03-30
Published Date: 2024-03-05
Inventors: Ramesh; Aditya, Dhariwal; Prafulla, Nichol; Alexander, Chu; Casey, Chen; Mark
Link to Patent: US11922550B1
Abstract:
Disclosed herein are methods, systems, and computer-readable media for generating an image corresponding to a text input. In an embodiment, operations may include accessing a text description and inputting the text description into a text encoder. The operations may include receiving, from the text encoder, a text embedding, and inputting at least one of the text description or the text embedding into a first sub-model configured to generate, based on at least one of the text description or the text embedding, a corresponding image embedding. The operations may include inputting at least one of the text description or the corresponding image embedding, generated by the first sub-model, into a second sub-model configured to generate, based on at least one of the text description or the corresponding image embedding, an output image. The operations may include making the output image, generated by the second sub-model, accessible to a device.
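The pipeline described here resembles a prior-plus-decoder design: a text embedding feeds a first sub-model that predicts an image embedding, which a second sub-model decodes into an image. The three stubs below are toy stand-ins (an assumption, shown only to make the data flow concrete):

```python
def text_encoder(text: str) -> list:
    """Stub text encoder: map characters to a small embedding vector."""
    return [float(ord(c) % 7) for c in text[:4]]

def prior(text_embedding: list) -> list:
    """First sub-model: map a text embedding to an image embedding."""
    return [x * 2.0 for x in text_embedding]

def decoder(image_embedding: list) -> list:
    """Second sub-model: decode an image embedding into an 'image'."""
    return [[x] * 2 for x in image_embedding]

def generate_image(text: str):
    """Chain the stages exactly as the abstract orders them."""
    return decoder(prior(text_encoder(text)))
```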
Patent Number: US 11886826 B1
Application Date: 2023-03-14
Published Date: 2024-01-30
Inventors: Bavarian; Mohammad, Jun; Heewoo
Link to Patent: US11886826B1
Abstract:
Disclosed herein are methods, systems, and computer-readable media for automatically generating and inserting text. In an embodiment, a method may include receiving an input text prompt comprising a prefix portion and a suffix portion. The method may also include accessing a language model based on the input text prompt, and determining a set of context parameters based on the input text prompt and the language model. The method may also include generating an output text prompt based on the set of context parameters and the language model, and inserting the output text prompt into the input text prompt.
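This is a fill-in-the-middle setup: the model sees a prefix and a suffix and generates text to insert between them. In the sketch below, "generation" is faked by scoring hand-written candidates on how cleanly they bridge the two sides; the candidate list and scoring rule are invented for illustration:

```python
def fim_model(prefix: str, suffix: str, candidates) -> str:
    """Stub: score candidates by whether they join both sides with spacing."""
    def score(c):
        s = 0
        if prefix.endswith(" ") or c.startswith(" "):
            s += 1
        if suffix.startswith(" ") or c.endswith(" "):
            s += 1
        return s
    return max(candidates, key=score)

def insert_text(prefix: str, suffix: str, candidates) -> str:
    """Generate a middle span and insert it between prefix and suffix."""
    middle = fim_model(prefix, suffix, candidates)
    return prefix + middle + suffix
```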
Patent Number: US 11887367 B1
Application Date: 2023-04-19
Published Date: 2024-01-30
Inventors: Baker; Bowen, Akkaya; Ilge, Zhokhov; Peter, Huizinga; Joost, Tang; Jie, Ecoffet; Adrien, Houghton; Brandon, Gonzalez; Raul Sampedro, Clune; Jeffrey
Link to Patent: US11887367B1
Abstract:
Disclosed herein are methods, systems, and computer-readable media for training a machine learning model to label unlabeled data and/or perform automated actions. In an embodiment, a method comprises receiving unlabeled digital video data, generating pseudo-labels for the unlabeled digital video data, the generating comprising receiving labeled digital video data, training an inverse dynamics model (IDM) using the labeled digital video data, and generating at least one pseudo-label for the unlabeled digital video data, wherein the at least one pseudo-label is based on a prediction, generated by the IDM, of one or more actions that mimic at least one timestep of the unlabeled digital video data. In some embodiments, the method further comprises adding the at least one pseudo-label to the unlabeled digital video data and further training the IDM or a machine learning model using the pseudo-labeled digital video data.
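The loop the abstract describes is the pseudo-labeling idea behind OpenAI's video pretraining work: an inverse dynamics model trained on a small labeled set predicts the actions behind unlabeled video, and those predictions become pseudo-labels for further training. The "IDM" below is a trivial lookup table that memorizes labeled transitions (an assumption, purely for illustration):

```python
def train_idm(labeled_clips):
    """'Train' an IDM by memorizing (frame, next_frame) -> action pairs."""
    idm = {}
    for frame, next_frame, action in labeled_clips:
        idm[(frame, next_frame)] = action
    return idm

def pseudo_label(idm, unlabeled_clips):
    """Predict the action behind each unlabeled transition as a pseudo-label."""
    return [(f, n, idm.get((f, n), "noop")) for f, n in unlabeled_clips]
```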
Patent Number: US 12008341 B2
Application Date: 2023-05-23
Published Date: 2024-06-11
Inventors: Chen; Mark, Tworek; Jerry, Sutskever; Ilya, Zaremba; Wojciech, Jun; Heewoo, Ponde de Oliveira Pinto; Henrique
Link to Patent: US12008341B2
Abstract:
Disclosed herein are methods, systems, and computer-readable media for generating natural language based on computer code input. In an embodiment, a method may comprise one or more of: accessing a docstring generation model configured to generate docstrings from computer code; receiving one or more computer code samples; generating, using the docstring generation model and based on the received one or more computer code samples, one or more candidate docstrings representing natural language text, each of the one or more candidate docstrings being associated with at least a portion of the one or more computer code samples; identifying at least one of the one or more candidate docstrings that provides an intent of the at least a portion of the one or more computer code samples; and/or outputting, via a user interface, the at least one identified docstring with the at least a portion of the one or more computer code samples.
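The flow is: generate several candidate docstrings from a code sample, then select the one that best expresses the code's intent. The heuristic "model" below just templates the function name, and the selection rule is shortest-wins; both are invented stand-ins, not the patented model:

```python
import re

def docstring_model(code: str) -> list:
    """Stub: emit candidate docstrings derived from the function name."""
    match = re.search(r"def\s+(\w+)", code)
    name = match.group(1) if match else "function"
    return [f"{name} does something.",
            f"{name.replace('_', ' ').capitalize()}."]

def best_docstring(code: str) -> str:
    """Pick the candidate judged to express intent (shortest, in this toy)."""
    return min(docstring_model(code), key=len)
```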
Patent Number: US 20240020116 A1
Application Date: 2023-05-23
Published Date: 2024-06-11
Inventors: Chen; Mark, Tworek; Jerry, Sutskever; Ilya, Zaremba; Wojciech, Jun; Heewoo, Ponde de Oliveira Pinto; Henrique
Link to Patent: US20240020116A1
Abstract:
Disclosed herein are methods, systems, and computer-readable media for generating natural language based on computer code input. In an embodiment, a method may comprise one or more of: accessing a docstring generation model configured to generate docstrings from computer code; receiving one or more computer code samples; generating, using the docstring generation model and based on the received one or more computer code samples, one or more candidate docstrings representing natural language text, each of the one or more candidate docstrings being associated with at least a portion of the one or more computer code samples; identifying at least one of the one or more candidate docstrings that provides an intent of the at least a portion of the one or more computer code samples; and/or outputting, via a user interface, the at least one identified docstring with the at least a portion of the one or more computer code samples.
Patent Number: US 20240020096 A1
Application Date: 2023-05-23
Published Date: Pending
Inventors: Chen; Mark, Tworek; Jerry, Sutskever; Ilya, Zaremba; Wojciech, Jun; Heewoo, Ponde de Oliveira Pinto; Henrique
Link to Patent: US20240020096A1
Abstract:
Disclosed herein are methods, systems, and computer-readable media for generating computer code based on natural language input. In an embodiment, a method may comprise one or more of: receiving a docstring representing natural language text specifying a digital programming result; generating, using a trained machine learning model, and based on the docstring, a computer code sample configured to produce respective candidate results; causing the computer code sample to be executed; identifying, based on the executing, a computer code sample configured to produce a particular candidate result associated with the digital programming result; performing at least one of outputting, via a user interface, the identified computer code sample, compiling the identified computer code sample, transmitting the identified computer code sample to a recipient device, storing the identified computer code sample, and/or re-executing the identified computer code sample.
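This abstract describes the generate-then-execute selection step: sample several candidate code implementations from a docstring, run each one, and keep the candidate whose execution matches the desired result. A sketch of that filter, with hardcoded candidate strings standing in for real model samples (an assumption):

```python
def generate_candidates(docstring: str):
    """Stub generator: return candidate implementations as source text."""
    return [
        "def f(x):\n    return x - 1",   # wrong candidate
        "def f(x):\n    return x + 1",   # correct candidate
    ]

def select_by_execution(docstring: str, test_input, expected):
    """Execute each candidate and keep the first one matching the expectation."""
    for src in generate_candidates(docstring):
        namespace = {}
        exec(src, namespace)             # run the candidate source
        if namespace["f"](test_input) == expected:
            return src
    return None
```

Executing untrusted generated code this way is unsafe outside a sandbox; the sketch only shows the selection logic.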