Find textual answers via an LLM.

Project description

LLMTextualAnswer

Python package for finding textual answers via LLMs. This is a Python port of the Wolfram Language LLMTextualAnswer function, focused on building prompts, wiring LangChain models, and parsing structured outputs.


Install

pip install LLMTextualAnswer

Usage

Question answering

from LLMTextualAnswer import LLMTextualAnswer
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

text = (
    "Born and raised in the Austrian Empire, Tesla studied engineering and physics "
    "in the 1870s without receiving a degree."
)

questions = ["Where born?"]

result = LLMTextualAnswer(
    text,
    questions,
    llm=llm,
    form=dict,
)

print(result)

Classification

Here is a list of workflow construction specifications:

queries = [
    'Make a classifier with the method RandomForest over the data dfTitanic; show precision and accuracy; plot True Positive Rate vs Positive Predictive Value.',
    'Make a recommender over the data frame dfOrders. Give the top 5 recommendations for the profile year:2022, type:Clothing, and status:Unpaid',
    'Create an LSA object over the text collection aAbstracts; extract 40 topics; show statistical thesaurus for "notebook", "equation", "changes", and "prediction"',
    'Compute quantile regression for dfTS with interpolation order 3 and knots 12 for the probabilities 0.2, 0.4, and 0.9.'
]

Here are the possible workflow names:

workflows = ['Classification', 'Latent Semantic Analysis', 'Quantile Regression', 'Recommendations']

For each workflow specification, give the corresponding (most likely) workflow name:

from LLMTextualAnswer import llm_classify  # ships with the same package (see Notes below)

for q in queries:
    print(f"Spec  : {q}")
    print(f"Class : {llm_classify(q, workflows, llm=llm, form=dict)}\n")
# Spec  : Make a classifier with the method RandomForest over the data dfTitanic; show precision and accuracy; plot True Positive Rate vs Positive Predictive Value.
# Class : Classification
# 
# Spec  : Make a recommender over the data frame dfOrders. Give the top 5 recommendations for the profile year:2022, type:Clothing, and status:Unpaid
# Class : Recommendations
# 
# Spec  : Create an LSA object over the text collection aAbstracts; extract 40 topics; show statistical thesaurus for "notebook", "equation", "changes", and "prediction"
# Class : Latent Semantic Analysis
# 
# Spec  : Compute quantile regression for dfTS with interpolation order 3 and knots 12 for the probabilities 0.2, 0.4, and 0.9.
# Class : Quantile Regression
# 


Notes

  • For more detailed examples see the notebook "Basic-usage.ipynb".
  • llm_textual_answer and llm_classify accept LangChain chat/text models that support .invoke.
  • Use prompt_style="chat" or prompt_style="text" if auto-detection is not desired.
  • When you want only the prompt template, pass form="StringTemplate".



Download files

Download the file for your platform.

Source Distribution

llmtextualanswer-0.1.1.tar.gz (8.5 kB)

Built Distribution


llmtextualanswer-0.1.1-py3-none-any.whl (7.8 kB)

File details

Details for the file llmtextualanswer-0.1.1.tar.gz.

File metadata

  • Download URL: llmtextualanswer-0.1.1.tar.gz
  • Upload date:
  • Size: 8.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.2

File hashes

Hashes for llmtextualanswer-0.1.1.tar.gz
  • SHA256: c7e8b916ac566bbc39cc8fc9096752a258dfb9a0da147cf161918a7b700f4efe
  • MD5: b78f8b57f07abd7d96cce0c9666b121a
  • BLAKE2b-256: 8d47d4974b01508aec7cd2c0ca5f20f78cb1df840aa84b6c7f884e9be94e2d1d


File details

Details for the file llmtextualanswer-0.1.1-py3-none-any.whl.

File metadata

File hashes

Hashes for llmtextualanswer-0.1.1-py3-none-any.whl
  • SHA256: a60f46b22d03d30594e68c9598629970d7673f9d943ca38ef7da18e45bd14a79
  • MD5: 9ead712511989ddec0e84aecc9e70576
  • BLAKE2b-256: b9daa3c7a1616485805dc5f10d993b8ef91dedbedfed425f9f03290bfb240849

