Tiny OpenAI ChatGPT and Whisper API for Pythonista
-
tinyOpenAI for Pythonista on GitHub
Features
- OpenAI ChatGPT and Whisper API library written in pure Python, so it can run in any Python environment: M1/M2 Macs, iPad/iPhone (e.g. Pythonista, Juno, CODE, Pyto, ...), and Android.
- Supports methods that conform to the ChatGPT API JSON format for API calls, provides an easy-to-use quick-dialog method with support for contextual conversation, and includes a simple language-translation method.
- Supports Whisper API calls to recognize uploaded audio files and transcribe them to text, or translate them into English.
Install
method 1: pip
- open StaSh, then run pip install tinyOpenAI
method 2: Copy code
- open the tinyOpenAI GitHub page, find tinyOpenAI.py, select all the code, copy it into Pythonista, and run it.
example for ChatGPT
```python
import tinyOpenAI

g = tinyOpenAI.ChatGPT('your OpenAI API_Key')
# g = tinyOpenAI.ChatGPT('your OpenAI API_Key', 'http://192.168.3.2:3128', Model='gpt-3.5-turbo-0301', Debug=True)

# Conversation
print(g.query('Write a rhyming poem with the sea as the title', system='You are a master of art, answer questions with emoji icons'))

# Continuous dialogue
print('======== continuous dialogue ============')
print(g.query('charles has $500, tom has $300, how much money do they have in total', True, 6))
print(g.query('charles and Tom who has more money', True, 6))
print(g.query('Sort them in order of money', True, 6))

# print history
print(g.Hinfo)

# clear history
g.cHinfo()

# Statistics
print('Call cnt: %d, Total using tokens: %d' % (g.Call_cnt, g.Total_tokens))
```
example for Whisper
```python
import tinyOpenAI

w = tinyOpenAI.Whisper('your OpenAI API_Key', Debug=True)
print(w.call('test1.m4a'))     # or mp3/mp4 file
print(w.call('test2.m4a', 1))  # or mp3/mp4 file
print('Call cnt: %d, Total Texts: %d' % (w.Call_cnt, w.Total_tokens))
```
-
If you installed tinyOpenAI via StaSh pip, you can also run tinyopenai from the StaSh command line to get a simple command-line ChatGPT.
-
update to V0.12
- Supports the Embedding API: vectorizes the incoming text, accepting a single string or a list of strings.
Embedding (get the embedding vector of the text)
- init(self, API_Key='', Proxy='', Model='text-embedding-ada-002', URL='https://api.openai.com/v1/embeddings', Debug=False)
- Initializes the Embedding object, with the following parameters:
- API_Key: your OpenAI API key
- Proxy: if needed, set your HTTP proxy server, e.g. http://192.168.3.1:3128
- Model: if needed, you can change it according to the OpenAI API documentation.
- URL: if OpenAI changes the API call address, you can change it here. (Note: for the Whisper class this is a list of two addresses: the first returns the original language, the second translates into English.)
- Debug: whether to print the error message on a network or call error; disabled by default.
- embed(data)
- data: the string, or list of strings, to be encoded
- The result is a list of embedding vectors (1536 dimensions), one per input string:
- For a single input string, use ret[0].get('embedding') to get the vector.
- For a list of input strings, get the list of vectors with [i.get('embedding') for i in ret].
- Statistics
- Call_cnt: the cumulative number of embed() calls
- Total_tokens: the cumulative number of tokens used
- Simple example
```python
import tinyOpenAI
import numpy as np

e = tinyOpenAI.Embedding('your OpenAI API_Key', Debug=True)
r = e.embed('just for fun')
print('vector dimension:', len(r[0].get('embedding')))

# Compare the similarity of two texts
r = e.embed(['just for fun', 'hello world.'])
print('Similarity result:', np.dot(r[0].get('embedding'), r[1].get('embedding')))
```
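A note on the np.dot comparison above: a plain dot product equals cosine similarity only when the vectors are unit-length, which OpenAI's text-embedding-ada-002 vectors are reported to be. For vectors that may not be normalized, divide by the norms first. A minimal sketch (the function name is illustrative, not part of tinyOpenAI):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two vectors, robust to non-unit length."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

For unit-length embeddings this gives the same result as np.dot, so either works with ada-002 output.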
-
@wolf71 I am just getting:
@ Error, HTTP Status_code is: 429 !
-
@rb HTTP Status_code 429
Rate limit errors ('Too Many Requests', 'Rate limit reached') are caused by hitting your organization's rate limit, i.e. the maximum number of requests and tokens that can be submitted per minute. Once the limit is reached, the organization cannot submit further requests until the rate limit resets.
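Since a 429 only means the request rate was exceeded, a common client-side remedy is to retry with exponential backoff. A generic sketch, independent of tinyOpenAI (the helper name and the use of RuntimeError as a stand-in for an HTTP 429 error are assumptions for illustration):

```python
import random
import time

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry a zero-argument callable with exponential backoff plus jitter.

    Re-raises the last error if all retries are exhausted.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except RuntimeError:  # stand-in for detecting an HTTP 429 response
            if attempt == max_retries - 1:
                raise
            # delay doubles each attempt (1s, 2s, 4s, ...) plus random jitter
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
```

Usage would look like result = with_backoff(lambda: g.query('hello')), assuming the wrapped call raises on a 429 rather than returning an error string.
-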
@wolf71 update to V0.13
Adds OpenAI stream support: just set stream=True.
Now you don't have to wait a long time for the whole result to arrive at once; instead, you can watch it appear bit by bit.
```python
import tinyOpenAI

g = tinyOpenAI.ChatGPT('your OpenAI API_Key', stream=True)
g.query('Write a rhyming poem with the sea as the title', system='You are a master of art, answer questions with emoji icons')
```
-
@wolf71 said in Tiny OpenAI ChatGPT and Whisper API for Pythonista:
I tried doing that, and I found that ChatGPT, with its superior intelligence, is doing much better; this is my personal opinion.