Changelog
This package is already used in production at Tune AI, so do not wait for a 1.x.x release for stability or expect it to reach 1.0.0. We do not follow the general rules of semantic versioning, and there can be breaking changes between minor versions. Any migration steps you need to take will be mentioned here.
0.6.0
Add distributed_chat functionality in `tuneapi.apis.turbo`. All model APIs now expose a `model.distributed_chat()` method, which enables fault-tolerant LLM API calls (see the sketch below).
Move `tuneapi.types.experimental` to `tuneapi.types.evals`.
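A minimal sketch of what a fault-tolerant batch call could look like; the `Openai` constructor, the `Thread`/`human` helpers, and the exact `distributed_chat()` signature (a list of threads in, a list of responses out) are assumptions for illustration, not confirmed API:

```python
# Hypothetical sketch: the distributed_chat() call shape is assumed here.
from tuneapi import apis as ta
from tuneapi import types as tt

model = ta.Openai()  # any ModelInterface implementation

# Build a batch of independent prompts; distributed_chat fans them out and
# retries individual failures so one bad call does not sink the whole batch.
prompts = [tt.Thread(tt.human(f"Summarise item {i}")) for i in range(10)]

responses = model.distributed_chat(prompts)
for out in responses:
    print(out)
```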
0.5.13
`tuneapi.types.ModelInterface` now has an `extra_headers` attribute.
0.5.12
Remove the code that sanitizes the assistant message for the Tune and OpenAI LLM APIs.
0.5.11
Fix bug where `parallel_tool_calls` was sent even for non-tool calls.
0.5.10
Remove redundant prints.
0.5.9
Set `parallel_tool_calls` to `False` by default for the OpenAI API.
0.5.8
If you have `numpy` installed in your environment, `tuneapi.utils.randomness.reservoir_sampling` will honour the seed value; if you do not have `numpy` installed, the seed value is ignored (see the sketch below).
Fix bug in the Gemini API body for functions with no parameters.
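A minimal usage sketch; the argument shape of `reservoir_sampling` (an iterable, a sample size, and a seed keyword) is an assumption about the signature:

```python
# Hypothetical sketch: argument names and the seed keyword are assumed.
from tuneapi.utils.randomness import reservoir_sampling

items = list(range(1_000))
sample = reservoir_sampling(items, 10, seed=42)  # seed honoured only when numpy is installed
print(sample)
```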
0.5.7
Implement `extra_headers` via `__init__` as well.
0.5.6
Remove `protobuf` as a dependency because it breaks a bunch of other packages. The functions are still present.
0.5.5
In all implementations of `tuneapi.types.chats.ModelInterface`, add a new input to the API endpoints called `extra_headers`, a dictionary used to update the outgoing headers.
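A minimal sketch of passing the new `extra_headers` dictionary on a call; the `Openai` constructor and the exact `chat()` call shape are assumptions for illustration:

```python
# Hypothetical sketch: which methods accept extra_headers is assumed here.
from tuneapi import apis as ta
from tuneapi import types as tt

model = ta.Openai()

# extra_headers is merged into the outgoing HTTP headers for this request.
out = model.chat(
    tt.Thread(tt.human("Hello there")),
    extra_headers={"X-Request-Id": "demo-123"},
)
print(out)
```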
0.5.4
Standardise `tuneapi.types.chats.ModelInterface` so that `model_id` and `api_token` are part of the base class.
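A minimal sketch of what the standardisation guarantees; how a given subclass populates these attributes (constructor arguments vs. environment variables) is assumed:

```python
# Hypothetical sketch: only the existence of model_id and api_token on the
# base class comes from the changelog; everything else is illustrative.
from tuneapi import apis as ta

model = ta.Openai()                  # any ModelInterface subclass
print(model.model_id)                # attribute guaranteed by the base class
print(model.api_token is not None)   # attribute guaranteed by the base class
```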
0.5.3
Fix bug in the Tune proxy API where the incorrect variable `stop_sequence` was sent instead of the correct `stop`, causing incorrect behaviour.
Bump dependency to `protobuf>=5.27.3`.
Remove `__version__` from the tuneapi package.
Remove the CLI entrypoint in `pyproject.toml`.
0.5.2
Add the ability to upload any file using `tuneapi.endpoints.FinetuningAPI.upload_dataset_file`, in addition to the existing way of uploading using threads.
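A minimal sketch, assuming the method takes a local file path; the constructor and argument shown are illustrative, not the confirmed signature:

```python
# Hypothetical sketch: the upload_dataset_file parameters are assumed here.
from tuneapi import endpoints as te

ft = te.FinetuningAPI()
ft.upload_dataset_file("./my_dataset.jsonl")  # upload a local file directly
```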
0.5.1
Fix bug in the endpoints module where an error was raised despite correct inputs.
0.5.0 (breaking)
In this release we have moved all the Tune Studio specific APIs out of `tuneapi.apis` and into `tuneapi.endpoints` to avoid cluttering the `apis` namespace.
- from tuneapi import apis as ta
+ from tuneapi import endpoints as te
...
- ta.ThreadsAPI(...)
+ te.ThreadsAPI(...)
Add support for finetuning APIs with `tuneapi.endpoints.FinetuningAPI`.
Primary environment variables have been changed from `TUNE_API_KEY` to `TUNEAPI_TOKEN` and from `TUNE_ORG_ID` to `TUNEORG_ID`; if you were using these, please update your environment variables (see the snippet after this section).
Removed CLI methods `test_models` and `benchmark_models`; if you want to use those, please copy the code from this commit.
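A minimal sketch of the environment variable rename from the consumer's side; whether and where the package reads these automatically is not shown here:

```python
import os

# Update any scripts that set or read the old names.
token = os.getenv("TUNEAPI_TOKEN")   # previously TUNE_API_KEY
org_id = os.getenv("TUNEORG_ID")     # previously TUNE_ORG_ID
```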
0.4.18
Fix bug where the function response was being deserialised to JSON before being sent to the different APIs.
0.4.17
Fix error in the `tuneapi.utils.serdeser.to_s3` function where the content-type key was incorrect.
0.4.16
Add support for Python 3.12.
Add `tool` as a valid role in `tuneapi.types.chats.Message`.
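A minimal sketch, assuming the `Message` constructor accepts a value followed by a role string:

```python
# Hypothetical sketch: the Message constructor shape is assumed here.
from tuneapi.types.chats import Message

tool_msg = Message('{"temperature": 21}', "tool")  # tool output fed back to the model
print(tool_msg.role)
```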
0.4.15
When there is an error in the model API, we used to print the error message; now we return the error message in the response.
0.4.14
Fix bug where a loose `pydantic` import was present.
0.4.13
Bug fixes in JSON deserialisation
0.4.12
Fix bug in the Threads API where an incorrect structure was sent by the client.
Add image support for the Anthropic API.
Add a `Message.images` field to store all images.
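A minimal sketch; how images are encoded in `Message.images` (base64 data vs. URLs) is an assumption here, only the field itself comes from this release:

```python
# Hypothetical sketch: the image encoding is assumed; only Message.images is confirmed.
from tuneapi import types as tt

msg = tt.human("What is in this picture?")
msg.images = ["<base64-encoded PNG>"]  # all images for the message live on this field
```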