trulens.providers.litellm.endpoint¶
Classes¶
LiteLLMCallback¶
Bases: EndpointCallback
Attributes¶
endpoint class-attribute instance-attribute¶
The endpoint owning this callback.
cost class-attribute instance-attribute¶
Costs tracked by this callback.
LiteLLMEndpoint¶
Bases: Endpoint
LiteLLM endpoint.
Attributes¶
tru_class_info instance-attribute¶
tru_class_info: Class
Class information of this pydantic object for use in deserialization.
Using this odd key to not pollute attribute names in whatever class we mix this into. Should be the same as CLASS_INFO.
instrumented_methods class-attribute¶
instrumented_methods: Dict[
Any, List[Tuple[Callable, Callable, Type[Endpoint]]]
] = defaultdict(list)
Mapping of classes/module-methods that have been instrumented for cost tracking along with the wrapper methods and the class that instrumented them.
Key is the class or module owning the instrumented method. The tuple value has:

- the original function,
- the wrapped version,
- the endpoint that did the wrapping.
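The shape of this mapping can be sketched with plain Python types. The names below (`FakeProviderModule`, the stand-in `Endpoint`, and the completion functions) are illustrative assumptions, not the actual trulens internals:

```python
from collections import defaultdict
from typing import Any, Callable, Dict, List, Tuple, Type


class Endpoint:
    """Stand-in for the real Endpoint class (assumption for illustration)."""


# Key: the class or module owning the instrumented method.
# Value: list of (original function, wrapped version, wrapping endpoint class).
instrumented_methods: Dict[
    Any, List[Tuple[Callable, Callable, Type[Endpoint]]]
] = defaultdict(list)


def completion(prompt: str) -> str:
    return "response"


def tracked_completion(prompt: str) -> str:
    # A real wrapper would record token counts and costs before returning.
    return completion(prompt)


class FakeProviderModule:
    """Stand-in for the module that owns `completion`."""


instrumented_methods[FakeProviderModule].append(
    (completion, tracked_completion, Endpoint)
)
```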
retries class-attribute instance-attribute¶
retries: int = 3
Number of retries to attempt (if performing requests using this class).
post_headers class-attribute instance-attribute¶
Optional headers to include in POST requests if they are made by this class.
pace class-attribute instance-attribute¶
pace: Pace = Field(
default_factory=lambda: Pace(
marks_per_second=DEFAULT_RPM / 60.0,
seconds_per_period=60.0,
),
exclude=True,
)
Pacing instance to maintain a desired rpm.
global_callback class-attribute instance-attribute¶
global_callback: EndpointCallback = Field(exclude=True)
Tracks costs of requests that are not made inside a track_cost context.
Also note that Endpoints are singletons (one for each unique name argument), hence this global callback will track all requests for the named API even if you try to create multiple endpoints (with the same name).
callback_class class-attribute instance-attribute¶
callback_class: Type[EndpointCallback] = Field(exclude=True)
Callback class to use for usage tracking.
callback_name class-attribute instance-attribute¶
Name of variable that stores the callback noted above.
litellm_provider class-attribute instance-attribute¶
litellm_provider: str = 'openai'
The litellm provider being used.
This is checked to determine whether cost tracking should come from litellm or from another endpoint for which we already have cost tracking; otherwise costs would be double counted.
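The double-counting rule can be sketched as a simple membership check. The provider names below are assumptions for illustration, not the actual list trulens consults:

```python
# Providers assumed to already have dedicated endpoint-based cost tracking.
PROVIDERS_WITH_OWN_TRACKING = {"openai", "bedrock"}


def should_track_with_litellm(litellm_provider: str) -> bool:
    """Track via litellm only when no dedicated endpoint already tracks this
    provider; otherwise the same request would be counted twice."""
    return litellm_provider not in PROVIDERS_WITH_OWN_TRACKING
```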
Classes¶
EndpointSetup dataclass¶
Functions¶
get_instances classmethod¶
get_instances() -> Generator[InstanceRefMixin]
Get all instances of the class.
load staticmethod¶
load(obj, *args, **kwargs)
Deserialize/load this object using the class information in tru_class_info to lookup the actual class that will do the deserialization.
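The lookup-then-deserialize pattern can be sketched as follows. The registry, the `Base`/`MyEndpoint` classes, and the flat-dict serialization are hypothetical simplifications; only the `tru_class_info` key name comes from the documentation above:

```python
from typing import Any, Dict, Type


class Base:
    def __init__(self, **kwargs: Any) -> None:
        self.__dict__.update(kwargs)


class MyEndpoint(Base):
    pass


# Hypothetical registry mapping serialized class information to classes.
CLASS_REGISTRY: Dict[str, Type[Base]] = {"MyEndpoint": MyEndpoint}


def load(obj: Dict[str, Any]) -> Base:
    # Use the stored class information to look up the class that will
    # actually perform the deserialization.
    cls = CLASS_REGISTRY[obj["tru_class_info"]]
    fields = {k: v for k, v in obj.items() if k != "tru_class_info"}
    return cls(**fields)
```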
model_validate classmethod¶
model_validate(*args, **kwargs) -> Any
Deserialize a jsonized version of the app into an instance of the class it was serialized from.
Note
This process uses extra information stored in the jsonized object and handled by WithClassInfo.
pace_me¶
pace_me() -> float
Block until we can make a request to this endpoint to keep pace with maximum rpm. Returns time in seconds since last call to this method returned.
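A minimal pacing loop along these lines might look like the sketch below. This is not the actual Pace implementation; it simply spaces calls evenly to respect an rpm budget:

```python
import time
from typing import Optional


class SimplePace:
    """Sketch: allow at most `rpm` calls per minute by spacing calls evenly."""

    def __init__(self, rpm: float) -> None:
        self.min_interval = 60.0 / rpm  # seconds between allowed requests
        self.last_call: Optional[float] = None

    def pace_me(self) -> float:
        """Block until another request is allowed; return seconds elapsed
        since the previous call to this method (0.0 on the first call)."""
        now = time.monotonic()
        if self.last_call is not None:
            wait = self.min_interval - (now - self.last_call)
            if wait > 0:
                time.sleep(wait)
                now = time.monotonic()
        elapsed = 0.0 if self.last_call is None else now - self.last_call
        self.last_call = now
        return elapsed
```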
run_in_pace¶
run_in_pace(
func: Callable[[A], B], *args, **kwargs
) -> B
Run the given func on the given args and kwargs at a pace consistent with the endpoint-specified rpm. Failures will be retried self.retries times.
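The retry behavior can be sketched independently of the pacing logic. Error handling here is simplified and the function is an illustration, not the trulens implementation:

```python
from typing import Any, Callable, Optional, TypeVar

B = TypeVar("B")


def run_with_retries(
    func: Callable[..., B], *args: Any, retries: int = 3, **kwargs: Any
) -> B:
    """Call func, retrying on failure up to `retries` additional times."""
    last_error: Optional[Exception] = None
    for _ in range(retries + 1):
        try:
            return func(*args, **kwargs)
        except Exception as e:  # a real version would also log and pace here
            last_error = e
    raise RuntimeError(
        f"Endpoint request failed after {retries} retries"
    ) from last_error
```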
run_me¶
run_me(thunk: Thunk[T]) -> T
DEPRECATED: Use run_in_pace instead.
Runs the given thunk, returning its output, on pace with the api. Retries the request multiple times if self.retries > 0.
print_instrumented classmethod¶
print_instrumented()
Print out all of the methods that have been instrumented for cost tracking. This is organized by the classes/modules containing them.
track_all_costs staticmethod¶
track_all_costs(
__func: CallableMaybeAwaitable[A, T],
*args,
with_openai: bool = True,
with_hugs: bool = True,
with_litellm: bool = True,
with_bedrock: bool = True,
with_cortex: bool = True,
with_dummy: bool = True,
**kwargs
) -> Tuple[T, Sequence[EndpointCallback]]
Track costs of all of the APIs we can currently track, over the execution of the given function.
track_all_costs_tally staticmethod¶
track_all_costs_tally(
__func: CallableMaybeAwaitable[A, T],
*args,
with_openai: bool = True,
with_hugs: bool = True,
with_litellm: bool = True,
with_bedrock: bool = True,
with_cortex: bool = True,
with_dummy: bool = True,
**kwargs
) -> Tuple[T, Thunk[Cost]]
Track costs of all of the APIs we can currently track, over the execution of the given function.
| RETURNS | DESCRIPTION |
|---|---|
| `T` | Result of evaluating the thunk. |
| `Thunk[Cost]` | A thunk that returns the total cost of all callbacks that tracked costs. This is a thunk as the costs might change after this method returns in case of `Awaitable` results. |
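The result-plus-cost-thunk pattern can be sketched generically. The `Cost` fields, the stand-in callback, and passing the callback directly to the function are all illustrative assumptions; in trulens the instrumented provider methods record into active callbacks behind the scenes:

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Tuple, TypeVar

T = TypeVar("T")


@dataclass
class Cost:
    """Minimal assumed cost record (fields are illustrative)."""
    n_tokens: int = 0
    cost: float = 0.0


@dataclass
class EndpointCallback:
    """Stand-in callback that accumulates a cost."""
    cost: Cost = field(default_factory=Cost)


def track_all_costs_tally(
    func: Callable[..., T], *args: Any, **kwargs: Any
) -> Tuple[T, Callable[[], Cost]]:
    cb = EndpointCallback()
    result = func(cb, *args, **kwargs)
    # Returning a thunk lets costs settle after this call returns, which
    # matters when the result is an Awaitable that is still being consumed.
    return result, (lambda: cb.cost)


def fake_llm(cb: EndpointCallback, prompt: str) -> str:
    # Pretend each word of the prompt is one token.
    cb.cost = Cost(n_tokens=len(prompt.split()), cost=0.001)
    return "response"
```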
track_cost¶
track_cost(
__func: CallableMaybeAwaitable[..., T], *args, **kwargs
) -> Tuple[T, EndpointCallback]
Tally only the usage performed within the execution of the given thunk.
Returns the thunk's result alongside the EndpointCallback object that includes the usage information.
wrap_function¶
wrap_function(func)
Create a wrapper of the given function to perform cost tracking.
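A generic cost-tracking wrapper in this spirit might look like the sketch below. The `usage`/`total_tokens` response shape is an assumption (a shape LLM APIs commonly use), and exposing the tallies as a function attribute is purely for illustration; the real wrapper inspects actual provider responses:

```python
import functools
from typing import Any, Callable


def wrap_function(func: Callable[..., dict]) -> Callable[..., dict]:
    """Wrap func so that each call's (assumed) token usage is tallied."""
    tallied = {"calls": 0, "tokens": 0}

    @functools.wraps(func)
    def wrapper(*args: Any, **kwargs: Any) -> dict:
        response = func(*args, **kwargs)
        tallied["calls"] += 1
        # Assumes the response carries a usage dict with a total token count.
        tallied["tokens"] += response.get("usage", {}).get("total_tokens", 0)
        return response

    wrapper.tallied = tallied  # expose tallies for inspection (illustrative)
    return wrapper
```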