# Changelog

Source: `NEWS.md`
## tidyprompt (development version)

- `answer_using_r()`: fixed error with unsafe conversion of resulting object to character
## tidyprompt 0.2.0

CRAN release: 2025-08-25
- Added provider-level prompt wraps (`provider_prompt_wrap()`). These are prompt wraps attached to an LLM provider object; they are applied to any prompt sent through that provider, either before or after the prompt-specific prompt wraps. This is useful when you want certain behavior across various prompts, without having to re-apply the same prompt wrap to each prompt.
- `answer_as_json()`: support 'ellmer' definitions of structured output (e.g., `ellmer::type_object()`). `answer_as_json()` can convert between 'ellmer' definitions and the previous R list objects which represent JSON schemas; thus, 'ellmer' and R list object definitions work with both regular and 'ellmer' LLM providers. When using an `llm_provider_ellmer()`, `answer_as_json()` will ensure the native 'ellmer' functions for obtaining structured output are used.
- `answer_using_tools()`: support 'ellmer' definitions of tools (from `ellmer::tool()`). `answer_using_tools()` can convert between 'ellmer' tool definitions and the previous R function objects documented with `tools_add_docs()`; thus, 'ellmer' and `tools_add_docs()` definitions work with both regular and 'ellmer' LLM providers. When using an `llm_provider_ellmer()`, `answer_using_tools()` will ensure the native 'ellmer' functions for registering tools are used.
- `answer_using_tools()`: because of the above, and because package 'mcptools' returns 'ellmer' tool definitions with `mcptools::mcp_tools()`, `answer_using_tools()` can now also be used with tools from Model Context Protocol (MCP) servers.
- `send_prompt()` can now return an updated 'ellmer' chat object when using an `llm_provider_ellmer()` (containing, for instance, the history of 'ellmer' turns and tool calls). Additionally, fixed issues with how turn history is handled in 'ellmer' chat objects.
- `send_prompt()`'s `clean_chat_history` argument now defaults to `FALSE`, as it may be confusing for users to see cleaned chat histories without having actively requested this. If `return_mode = "full"`, `$clean_chat_history` is also no longer included when `clean_chat_history = FALSE`.
- `llm_provider_openai()` now supports (as default) the OpenAI Responses API, which allows setting parameters like `reasoning_effort` and `verbosity` (relevant for gpt-5). The OpenAI Chat Completions API is also still supported.
- `llm_provider_google_gemini()` has been superseded by `llm_provider_ellmer(ellmer::chat_google_gemini())`.
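  The superseded Gemini route can be sketched as follows (assumes the 'ellmer' package is installed, a Gemini API key is configured, and that `answer_as_text()` takes a `max_words` argument):

  ``` r
  library(tidyprompt)

  # Wrap an 'ellmer' chat object as a tidyprompt LLM provider
  provider <- llm_provider_ellmer(ellmer::chat_google_gemini())

  # Prompts are then sent through the 'ellmer' chat object
  "What is the capital of France?" |>
    answer_as_text(max_words = 1) |>
    send_prompt(provider)
  ```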
- Added `json_type` and `tool_type` fields to LLM provider objects. When the route towards structured output (in `answer_as_json()`) or tool use (in `answer_using_tools()`) is determined automatically, these fields can override the type decided by the `api_type` field (e.g., a user can force the text-based type when using an OpenAI-type LLM provider with a model which does not support the typical OpenAI API parameters for structured output).
- Updated how responses are streamed (now with `httr2::req_perform_connection()`, since `httr2::req_perform_stream()` is being deprecated).
- Fixed a bug where the LLM provider object was not properly passed on to `modify_fn` in `prompt_wrap()`, which could lead to errors when dynamically constructing prompt text based on the LLM provider type.
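  A sketch of the new OpenAI Responses API parameters and the `json_type` override (the `parameters` list shape, the field-assignment syntax, and the exact `json_type` value are assumptions; an OpenAI API key is assumed to be configured):

  ``` r
  library(tidyprompt)

  # OpenAI provider; the Responses API is now the default,
  # enabling parameters like reasoning_effort and verbosity
  provider <- llm_provider_openai(
    parameters = list(
      model = "gpt-5",
      reasoning_effort = "low",
      verbosity = "low"
    )
  )

  # Hypothetical: force the text-based route for structured output,
  # overriding what the api_type field would decide
  provider$json_type <- "text-based"

  "Give a JSON object with fields 'city' and 'country'." |>
    answer_as_json() |>
    send_prompt(provider)
  ```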
## tidyprompt 0.1.0

CRAN release: 2025-08-18
- New prompt wraps `answer_as_category()` and `answer_as_multi_category()`
- New `llm_break_soft()` interrupts prompt evaluation without error
- New experimental provider `llm_provider_ellmer()` for 'ellmer' chat objects
- Ollama provider gains `num_ctx` parameter to control context window size
- `set_option()` and `set_options()` are now available for the Ollama provider to configure options
- Error messages are more informative when an LLM provider cannot be reached
- Google Gemini provider now works without errors in affected cases
- Chat history handling is safer; rows with `NA` values no longer cause errors in specific cases
- Final-answer extraction in chain-of-thought prompts is more flexible
- Moved repository to https://github.com/KennispuntTwente/tidyprompt
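A sketch combining some of the 0.1.0 additions (the `set_option()` call signature and the `categories` argument name are assumptions, as is the availability of a local Ollama instance):

``` r
library(tidyprompt)

# Ollama provider; num_ctx controls the context window size
provider <- llm_provider_ollama()
provider$set_option("num_ctx", 8192)

# Classify a message into one of several categories
"You have won a free cruise! Reply now!" |>
  answer_as_category(categories = c("spam", "not spam")) |>
  send_prompt(provider)
```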