• Pull a prompt from the hub.

    Type Parameters

    • T extends Runnable<any, any, RunnableConfig<Record<string, any>>>

    Parameters

    • ownerRepoCommit: string

      The name of the repo containing the prompt, as well as an optional commit hash separated by a slash.

    • Optional options: {
          apiKey?: string;
          apiUrl?: string;
          includeModel?: boolean;
          modelClass?: (new (...args: any[]) => BaseLanguageModel<any, BaseLanguageModelCallOptions>);
      }
      • Optional apiKey?: string

        LangSmith API key to use when pulling the prompt

      • Optional apiUrl?: string

        LangSmith API URL to use when pulling the prompt

      • Optional includeModel?: boolean

        Whether to also instantiate and attach a model instance to the prompt, if the prompt has associated model metadata. If set to true, invoking the resulting pulled prompt will also invoke the instantiated model. For non-OpenAI models, you must also set "modelClass" to the correct model class.

      • Optional modelClass?: (new (...args: any[]) => BaseLanguageModel<any, BaseLanguageModelCallOptions>)

        If includeModel is true, the class of the model to instantiate. Required for non-OpenAI models. If you are running in Node or another environment that supports dynamic imports, you may instead import this function from "langchain/hub/node" and pass "includeModel: true" instead of specifying this parameter.

          • new (...args): BaseLanguageModel<any, BaseLanguageModelCallOptions>
          • Parameters

            • Rest ...args: any[]

            Returns BaseLanguageModel<any, BaseLanguageModelCallOptions>

    Returns Promise<T>
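
    A minimal usage sketch. The prompt names and the Anthropic model class are illustrative assumptions, and a LangSmith API key is assumed to be available (e.g. via the LANGCHAIN_API_KEY environment variable or the "apiKey" option):

    ```typescript
    import * as hub from "langchain/hub";
    // Hypothetical non-OpenAI model class used to illustrate "modelClass":
    import { ChatAnthropic } from "@langchain/anthropic";

    // Pull a prompt by owner/repo (prompt name is illustrative):
    const prompt = await hub.pull("hwchase17/rag-prompt");

    // Pull a prompt together with its associated model metadata.
    // For a non-OpenAI model, pass the model class explicitly:
    const runnable = await hub.pull("my-owner/my-anthropic-prompt", {
      includeModel: true,
      modelClass: ChatAnthropic,
    });
    ```

    In Node (or any environment with dynamic imports), importing `pull` from "langchain/hub/node" lets you pass `includeModel: true` without specifying `modelClass`, since the model class can then be resolved dynamically.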