ChatNetmind
This page will help you get started with Netmind chat models. For detailed documentation of all ChatNetmind features and configurations, head to the API reference.
Overview
Integration details
Class | Package | Local | Serializable | JS support | Package downloads | Package latest |
---|---|---|---|---|---|---|
ChatNetmind | langchain-netmind | ✅/❌ | beta | ✅/❌ | | |
Model features
Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
---|---|---|---|---|---|---|---|---|---|
✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ | ✅/❌ |
Setup
To access Netmind models you'll need to create a Netmind account, get an API key, and install the langchain-netmind integration package.
Credentials
Head to (TODO: link) to sign up for Netmind and generate an API key. Once you've done this, set the NETMIND_API_KEY environment variable:
import getpass
import os
if not os.getenv("NETMIND_API_KEY"):
os.environ["NETMIND_API_KEY"] = getpass.getpass("Enter your Netmind API key: ")
If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:
# os.environ["LANGCHAIN_TRACING_V2"] = "true"
# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
Installation
The LangChain Netmind integration lives in the langchain-netmind package:
%pip install -qU langchain-netmind
Instantiation
Now we can instantiate our model object and generate chat completions:
from langchain_netmind import ChatNetmind
llm = ChatNetmind(
model="model-name",
temperature=0,
max_tokens=None,
timeout=None,
max_retries=2,
# other params...
)
Invocation
messages = [
(
"system",
"You are a helpful assistant that translates English to French. Translate the user sentence.",
),
("human", "I love programming."),
]
ai_msg = llm.invoke(messages)
ai_msg
print(ai_msg.content)
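The (role, content) tuples above are LangChain shorthand that gets coerced into message objects before being sent to the model. A minimal sketch of the idea, using plain dicts as illustrative stand-ins (LangChain's real coercion produces SystemMessage/HumanMessage objects, and the function name here is hypothetical):

```python
# Sketch: normalize (role, content) tuples into the role/content dict
# shape most chat APIs expect. Illustrative only -- not LangChain internals.

def normalize(messages):
    # LangChain's "human" and "ai" roles map to the common
    # "user" and "assistant" roles used by chat completion APIs.
    role_map = {"system": "system", "human": "user", "ai": "assistant"}
    return [
        {"role": role_map[role], "content": content}
        for role, content in messages
    ]

messages = [
    ("system", "You are a helpful assistant that translates English to French."),
    ("human", "I love programming."),
]
print(normalize(messages))
```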
Chaining
We can chain our model with a prompt template like so:
from langchain_core.prompts import ChatPromptTemplate
prompt = ChatPromptTemplate(
[
(
"system",
"You are a helpful assistant that translates {input_language} to {output_language}.",
),
("human", "{input}"),
]
)
chain = prompt | llm
chain.invoke(
{
"input_language": "English",
"output_language": "German",
"input": "I love programming.",
}
)
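The `|` in `prompt | llm` is LangChain Expression Language (LCEL) composition: each runnable's output becomes the next runnable's input. A self-contained sketch of that idea, using a simplified stand-in class rather than LangChain's actual Runnable implementation:

```python
# Simplified stand-in for LCEL pipe composition: `a | b` builds a new
# runnable whose invoke() threads the input through a, then b.
class Runnable:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other):
        # Compose: run self first, feed its output to other.
        return Runnable(lambda x: other.invoke(self.invoke(x)))

# Toy "prompt" formats the input dict; toy "llm" just upper-cases.
prompt = Runnable(lambda d: f"Translate to {d['output_language']}: {d['input']}")
fake_llm = Runnable(str.upper)

chain = prompt | fake_llm
print(chain.invoke({"output_language": "German", "input": "I love programming."}))
# prints: TRANSLATE TO GERMAN: I LOVE PROGRAMMING.
```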
API reference
For detailed documentation of all ChatNetmind features and configurations, head to the API reference.
Related
- Chat model conceptual guide
- Chat model how-to guides