Demystifying the LLM integration tool LangChain with 100 lines of code!

2023-07-27 06:57:32

 

Compiled by | 王子彧    Editor | 梦依丹

Produced by | CSDN (ID: CSDNnews)

LangChain is a powerful framework that lets users quickly build applications and pipelines around large language models. It integrates directly with OpenAI's GPT-3 and GPT-3.5 models, as well as open-source alternatives on Hugging Face such as Google's flan-t5 models. Beyond that, it provides a set of tools, components, and interfaces that simplify building applications powered by large language models (LLMs) and chat models. LangChain makes it easy to manage interactions with language models, chain multiple components together, and integrate additional resources such as APIs and databases. With it, developers can readily build their own AI knowledge base.

Today, the LLM interface framework LangChain has collected more than 30,000 stars on GitHub and has become one of the most popular toolkits around.

On May 4, Scott Logic CTO Colin Eberhardt published a blog post in which he re-implemented LangChain in 100 lines of code to reveal how it works under the hood.

The LangChain main question loop

Colin Eberhardt says the part of LangChain that interests him most is its agent model. This API lets you build sophisticated conversational interfaces that draw on a variety of tools (for example, Google search or a calculator) to answer questions. It thereby addresses two problems that arise when using LLMs to answer non-trivial questions: their tendency to produce wrong answers, and their lack of up-to-date data.

Broadly speaking, the agent model turns the LLM into an orchestrator: it takes a question, decomposes it into chunks, and then uses the appropriate tools to assemble an answer.

Digging into the LangChain codebase reveals that this process is driven by the following prompt:

The prompt breaks down into several parts:

1. A clear statement of the overall goal: "Answer the following question ..."

2. A list of the tools, with a brief description of what each one does

3. The steps to follow when solving the problem, which may involve iteration

4. The question. The first question comes next, and this is where GPT will start adding text (i.e., the completion)
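Put together, those parts produce a prompt along the following lines. This is an illustrative sketch paraphrased from the structure described above; the exact wording in LangChain's source may differ:

```text
Answer the following questions as best you can. You have access to the following tools:

search: a search engine. useful for when you need to answer questions about current events.

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [search]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin!

Question: ${question}
```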

Colin Eberhardt finds part 3 particularly interesting: it is where GPT is "taught", via a single worked example (i.e., one-shot learning), to act as the orchestrator. The orchestration approach taught here is chain-of-thought reasoning, which decomposes the problem into smaller components; researchers have found that this yields better results and more logically sound reasoning.

This is the art of prompt design!

Using this prompt, the following code sends it, together with the question "What was the high temperature in SF yesterday in Fahrenheit?", to GPT-3.5 via the OpenAI API:
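The call is essentially the following (reconstructed from the snippet in the post; the `completePrompt` helper wraps the OpenAI completions endpoint and expects `OPENAI_API_KEY` in the environment):

```javascript
// Send a prompt to the OpenAI completions API and return the parsed response.
const completePrompt = async (prompt) =>
  await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer " + process.env.OPENAI_API_KEY,
    },
    body: JSON.stringify({
      model: "text-davinci-003",
      prompt,
      max_tokens: 256,
      temperature: 0.7,
      stream: false,
    }),
  }).then((res) => res.json());

// Example usage (promptWithQuestion stands for the prompt above plus the
// question); guarded so the sketch can load without an API key:
if (process.env.OPENAI_API_KEY) {
  const promptWithQuestion =
    "Question: What was the high temperature in SF yesterday in Fahrenheit?";
  completePrompt(promptWithQuestion).then((response) =>
    console.log(response.choices[0].text)
  );
}
```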

The result looks like this:
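The completion shown in the post reads:

```text
Question: What was the high temperature in SF yesterday in Fahrenheit?
Thought: I can try searching the answer
Action: search
Action Input: "high temperature san francisco yesterday fahrenheit"
Observation: Found an article from the San Francisco Chronicle forecasting
 a high of 69 degrees
Thought: I can use this to determine the answer
Final Answer: The high temperature in SF yesterday was 69 degrees Fahrenheit.
```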

GPT has correctly determined the steps to take: it should run a search, using the term "high temperature san francisco yesterday fahrenheit". Interestingly, though, it has also predicted the search result in advance, supplying the answer of 69°F itself.

It is impressive that, from a simple prompt alone, GPT has "reasoned" about the best way to answer this question. If you just ask GPT directly, "What was the high temperature in San Francisco yesterday?", it replies: "The high temperature in San Francisco yesterday (August 28, 2019) was 76°F." That is clearly not yesterday, yet the temperature reported for that date is correct!

So, to stop GPT from imagining the entire conversation, we simply need to specify a stop sequence.
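A minimal sketch of that change, assuming (as in langchain-mini) that generation should halt as soon as the model starts to write an "Observation:" line of its own, since a real tool will supply that line instead:

```javascript
// Request body with a stop sequence added: the completion ends right before
// the model would invent an "Observation:". Treat the exact stop token as an
// assumption drawn from langchain-mini.
const prompt = "Question: ...";
const body = JSON.stringify({
  model: "text-davinci-003",
  prompt,
  max_tokens: 256,
  temperature: 0.7,
  stop: ["Observation:"],
});
```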

Search tool

With the completion now stopping in the right place, the next step is to create the first "tool", which performs a Google search. Colin Eberhardt uses SerpApi to scrape Google and return the response in a simple JSON format.

The tool is defined below and named search:
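A sketch of the tool definition, following the shape of the code in the post; the SerpApi query parameters and the `answer_box` response fields are assumptions based on that code, and the API key is read from the environment:

```javascript
// Query SerpApi and prefer the "answer box" that Google surfaces for
// questions it can answer directly.
const googleSearch = async (question) =>
  await fetch(
    `https://serpapi.com/search?api_key=${process.env.SERPAPI_API_KEY}&q=${question}`
  )
    .then((res) => res.json())
    .then((res) => res.answer_box?.answer || res.answer_box?.snippet);

// Register it under the name "search", with a description the LLM reads
// when deciding which tool to use:
const tools = {
  search: {
    description:
      "a search engine. useful for when you need to answer questions " +
      "about current events. input should be a search query.",
    execute: googleSearch,
  },
};
```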

The function uses SerpApi and, in this case, relies mainly on the result exposed through the page's "answer box" component. This is a neat way of getting Google to provide an answer rather than just a list of web pages.

Next, the prompt template is updated so that the tools are added dynamically:
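The substitution can be sketched as follows. In langchain-mini the template lives in a prompt.txt file; it is inlined here so the example is self-contained, and the placeholder names are assumptions:

```javascript
// An abbreviated template with ${tools} and ${question} placeholders
// (in the real project this is read from prompt.txt).
const promptTemplate = `Answer the following questions as best you can.
You have access to the following tools:

\${tools}

Question: \${question}`;

const tools = {
  search: {
    description:
      "a search engine. useful for when you need to answer questions " +
      "about current events. input should be a search query.",
  },
};

const question = "What was the high temperature in SF yesterday in Fahrenheit?";

// Substitute the question and a dynamically generated "name: description"
// list built from the registered tools.
const prompt = promptTemplate
  .replace("${question}", question)
  .replace(
    "${tools}",
    Object.keys(tools)
      .map((toolname) => `${toolname}: ${tools[toolname].description}`)
      .join("\n")
  );
```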

Colin Eberhardt wants the tools to be executed on a given iteration, with the result appended to the prompt. This process continues until the LLM orchestrator decides it has enough information and returns.

The next step:
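The main loop can be sketched like this. The regexes mirror those in langchain-mini; `completePrompt` is stubbed here with scripted responses so the shape of the iteration is visible without calling OpenAI:

```javascript
// Stand-in for the OpenAI call: returns two scripted completions, first a
// tool invocation, then a final answer.
const responses = [
  'Thought: I should search for this\nAction: search\nAction Input: "sf high temp yesterday"\n',
  "Thought: I now know the answer\nFinal Answer: 69 degrees Fahrenheit",
];
const completePrompt = async () => responses.shift();

// Stubbed search tool that echoes a plausible observation.
const tools = {
  search: { execute: async (input) => `a high of 69 degrees (for "${input}")` },
};

const answerQuestion = async (question) => {
  let prompt = `Question: ${question}\n`;
  // Let the model iterate until it produces a Final Answer.
  while (true) {
    const response = await completePrompt(prompt);
    // Add the completion to the growing prompt.
    prompt += response;
    const action = response.match(/Action: (.*)/)?.[1];
    if (action) {
      // Execute the action specified by the LLM and feed the result back in.
      const actionInput = response.match(/Action Input: "?([^"\n]*)"?/)?.[1];
      const observation = await tools[action.trim()].execute(actionInput);
      prompt += `Observation: ${observation}\n`;
    } else {
      return response.match(/Final Answer: (.*)/)?.[1];
    }
  }
};
```

With the stub above, `answerQuestion(...)` resolves to "69 degrees Fahrenheit" after one search round-trip.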

When Colin Eberhardt ran the code above, it answered that "the maximum temperature in Newcastle (England) yesterday was 56°F and the minimum 46°F", which was exactly right.

Watching the prompt grow with each iteration, you can see the chain of tool invocations:

It successfully invoked the search tool and, from the resulting observation, determined that it had enough information to give a summarized response.

Calculator tool
iterate="iterate" until="until" it="it" finds="finds" final="final" while="while" true="true" const="const" response="await" completeprompt="completeprompt" add="add" this="this" action="#" if="if" execute="execute" specified="specified" by="by" llms="llms" actioninput="response.match(/Action" input:="input:" result="await" observation:="observation:" else="else" return="return" answer:="answer:" was="was" temperature="temperature" in="in" newcastle="newcastle" england="england" yesterday="yesterday" colin="colin" f="f" what="what" requires="requires" looking="looking" up="up" information="information" weather="weather" maximum="maximum" yesterday:="yesterday:" at="at" pm="pm" minimum="minimum" average="average" and="and" parser="parser" from="from" calculator:="calculator:" getting="getting" of="of" math="math" expression="expression" tool="tool" valid="valid" mathematical="mathematical" that="that" could="could" executed="executed" simple="simple" calculator="calculator" is="is" square="square" root="root" i="i" use="use" now="now" know="know" c="c" high="high" sf="sf" fahrenheit="fahrenheit" same="same" value="value" celsius="celsius" find="find" san="san" francisco="francisco" history="history" previous="previous" hours="hours" convert="convert" or="or" google="google" gpt="gpt" langchain="langchain" following="following" conversation="conversation" follow="follow" rephrase="rephrase" standalone="standalone" history:="history:" question:="question:" mergetemplate="fs.readFileSync(\"merge.txt\"," merge="merge" chat="chat" with="with" new="new" mergehistory="async" await="await" main="main" loop="loop" user="user" rl="rl" can="can" help="help" console="console" q:="q:" equal="equal" world="world" record="record" solving="solving" rubiks="rubiks" cube="cube" rubik="rubik" seconds="seconds" held="held" yiheng="yiheng" china="china" robot="robot" solve="solve" faster="faster" fastest="fastest" time="time" has="has" solved="solved" who="who" made="made" 
created="created" would="would" an="an" human="human" expect="expect" takes="takes" person="person" three="three" research="research" confirm="confirm" confirmed="confirmed" which="which" set="set" engineer="engineer" albert="albert" beer="beer" his="his" sub1="sub1" reloaded="reloaded" researchers="researchers" realised="realised" they="they" more="more" quickly="quickly" using="using" different="different" type="type" motor="motor" their="their" best="best" mcu="mcu" film="film" critics="critics" avengers:="avengers:" endgame="endgame" plot="plot" outline="outline" thanos="thanos" decimates="decimates" planet="planet" universe="universe" remaining="remaining" avengers="avengers" must="must" out="out" way="way" bring="bring" back="back" vanquished="vanquished" allies="allies" epic="epic" showdown="showdown" die="die" stark="stark" black="black" widow="widow" vision="vision" died="died" avenger="avenger" not="not" so="so" your="your" last="last" wrong="wrong" joel="joel" spolsky="spolsky" langchain-mini="langchain-mini" data-copy-origin="https://shimo.im" style="font-size: 18px;">

Colin Eberhardt 认为可以通过添加计算器工具来使其更强大:

计算器工具使用 expr-eval 模块来完成所有繁重的工作,因此这只是一个非常简单的补充,加上之后就能做一些数学运算了。同样,要理解其内部工作原理,需要再次查看完整的提示,而不仅仅是看结果:
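原文此处展示的提示轨迹大致如下(示意性重建,非逐字引用):

```text
Question: What is the square root of 25?
Thought: I should use the calculator for this
Action: calculator
Action Input: 25^(1/2)
Observation: 5
Thought: I now know the answer
Final Answer: The square root of 25 is 5.
```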

在这里,LLM 已成功判断出这个问题需要用到计算器。它还知道,对计算器而言,"25 的平方根"通常写作"25^(1/2)",从而得到了预期的结果。

当然,现在可以提出需要同时搜索网络和进行计算的问题了。当被问及"昨天旧金山的最高温度是多少华氏度?换算成摄氏度又是多少?"时,它能正确回答:"昨天,旧金山的最高温度是 54°F,即 12.2°C。"

让我们看看它是如何实现这一点的:
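原文此处的执行轨迹大致如下(示意性重建,温度换算按 (54 - 32) * 5 / 9 ≈ 12.2 计算):

```text
Question: What was the high temperature in SF yesterday in Fahrenheit? And in Celsius?
Thought: I should search for the temperature first
Action: search
Action Input: "high temperature san francisco yesterday fahrenheit"
Observation: San Francisco Temperature Yesterday. Maximum temperature yesterday: 54 °F
Thought: I need to convert 54 °F to Celsius
Action: calculator
Action Input: (54 - 32) * 5 / 9
Observation: 12.222222222222221
Thought: I now know the final answer
Final Answer: Yesterday, the high temperature in SF was 54°F, or 12.2°C.
```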

在第一次迭代中,它像之前一样执行了 Google 搜索。但它没有就此给出最终答案,而是推断出还需要把这个温度换算成摄氏度。有趣的是,LLM 本身就知道换算公式,因此可以直接调用计算器完成计算。最终答案也得到了正确的总结,注意摄氏值还进行了合理的四舍五入。
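上面这种"思考、行动、观察"的迭代,其核心循环可以勾勒成下面这样(示意性实现:completePrompt 用写死的两轮回复模拟 LLM,search 工具也用固定结果代替真实搜索,仅用于演示解析与回填的流程):

```javascript
// ReAct 循环示意:反复调用 LLM,从回复中解析 Action / Action Input,
// 执行对应工具并把结果作为 Observation 回填提示,直到出现 Final Answer。
const tools = {
  search: {
    execute: async (input) => "a high of 69 degrees", // 模拟的搜索结果
  },
};

// 模拟 LLM:第一轮要求执行搜索,第二轮给出最终答案
const cannedResponses = [
  'Thought: I can search for this\nAction: search\nAction Input: "SF high temperature yesterday"\n',
  "Thought: I now know the answer\nFinal Answer: 69 degrees Fahrenheit\n",
];
let call = 0;
const completePrompt = async (prompt) => cannedResponses[call++];

const answerQuestion = async (question) => {
  let prompt = `Question: ${question}\n`;
  while (true) {
    const response = await completePrompt(prompt);
    prompt += response; // 把模型输出追加回提示,维持推理上下文
    const action = response.match(/Action: (.*)/)?.[1];
    if (action) {
      // 解析工具输入,执行工具,并把结果作为 Observation 写回提示
      const actionInput = response.match(/Action Input: "?([^"\n]*)"?/)?.[1];
      const observation = await tools[action.trim()].execute(actionInput);
      prompt += `Observation: ${observation}\n`;
    } else {
      return response.match(/Final Answer: (.*)/)?.[1];
    }
  }
};

// 调用示例:
// answerQuestion("What was the high temperature in SF yesterday?")
//   .then(console.log); // 打印 "69 degrees Fahrenheit"
```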

这里仅有约 80 行代码,但实现的功能让人印象深刻。Colin Eberhardt 表示,我们可以做到的远不止于此。

"],[20,"\n","24:\"QXvs\"|36:177|direction:\"ltr\""],[20," await fetch(\"https://api.openai.com/v1/completions\", {"],[20,"\n","24:\"AjoF\"|36:177|direction:\"ltr\""],[20," method: \"POST\","],[20,"\n","24:\"sQ2d\"|36:177|direction:\"ltr\""],[20," headers: {"],[20,"\n","24:\"cKOi\"|36:177|direction:\"ltr\""],[20," \"Content-Type\": \"application/json\","],[20,"\n","24:\"PaXk\"|36:177|direction:\"ltr\""],[20," Authorization: \"Bearer \" + process.env.OPENAI_API_KEY,"],[20,"\n","24:\"Gxf0\"|36:177|direction:\"ltr\""],[20," },"],[20,"\n","24:\"ywEP\"|36:177|direction:\"ltr\""],[20," body: JSON.stringify({"],[20,"\n","24:\"Y8JO\"|36:177|direction:\"ltr\""],[20," model: \"text-davinci-003\","],[20,"\n","24:\"29CS\"|36:177|direction:\"ltr\""],[20," prompt,"],[20,"\n","24:\"EtaP\"|36:177|direction:\"ltr\""],[20," max_tokens: 256,"],[20,"\n","24:\"nuq1\"|36:177|direction:\"ltr\""],[20," temperature: 0.7,"],[20,"\n","24:\"kdl3\"|36:177|direction:\"ltr\""],[20," stream: false,"],[20,"\n","24:\"ixei\"|36:177|direction:\"ltr\""],[20," }),"],[20,"\n","24:\"NLO7\"|36:177|direction:\"ltr\""],[20," })"],[20,"\n","24:\"FK9K\"|36:177|direction:\"ltr\""],[20," .then((res) => res.json());"],[20,"\n","24:\"19X3\"|36:177|direction:\"ltr\""],[20," .then((res) => res.choices[0].text);"],[20,"\n","24:\"jWqM\"|36:177|direction:\"ltr\""],[20,"\n","24:\"tNAw\"|36:177|direction:\"ltr\""],[20,"const response = await completePrompt(promptWithQuestion);"],[20,"\n","24:\"C1s1\"|36:177|direction:\"ltr\""],[20,"console.log(response.choices[0].text);"],[20,"\n","24:\"UJ4C\"|36:177|direction:\"ltr\""],[20,"得到的结果如下:","27:\"12\""],[20,"\n","24:\"yZpl\"|direction:\"ltr\"|linespacing:\"150\""],[20,"Question: What was the high temperature in SF yesterday in Fahrenheit?"],[20,"\n","24:\"xxua\"|36:177|direction:\"ltr\""],[20,"Thought: I can try searching the answer"],[20,"\n","24:\"LL2H\"|36:177|direction:\"ltr\""],[20,"Action: search"],[20,"\n","24:\"PyUa\"|36:177|direction:\"ltr\""],[20,"Action Input: 
\"high temperature san francisco yesterday fahrenheit\""],[20,"\n","24:\"iwEZ\"|36:177|direction:\"ltr\""],[20,"Observation: Found an article from the San Francisco Chronicle forecasting"],[20,"\n","24:\"itB3\"|36:177|direction:\"ltr\""],[20," a high of 69 degrees"],[20,"\n","24:\"RSBu\"|36:177|direction:\"ltr\""],[20,"Thought: I can use this to determine the answer"],[20,"\n","24:\"MdUM\"|36:177|direction:\"ltr\""],[20,"Final Answer: The high temperature in SF yesterday was 69 degrees Fahrenheit."],[20,"\n","24:\"oD1H\"|36:177|direction:\"ltr\""],[20,"可以看到 GPT 已经确定了执行步骤,即应该执行搜索,使用“昨日旧金山高温华氏度”这个术语。但有意思的是,它已经提前预测出了搜索结果,给出了 69 °F 的答案。","27:\"12\""],[20,"\n","24:\"hnYu\"|7:3|direction:\"ltr\"|linespacing:\"150\""],[20,"令人印象深刻的是,仅仅通过简单的提示,GPT 就已经“推理”了回答这个问题的最佳方法。如果你只是直接问GPT :“昨天旧金山高温是多少?”,它会回答:”对我来说,昨天( 2019 年 8 月 28 日)旧金山的高温是 76 °F。显然,这不是昨天,但该日期报告的温度却是正确的!","27:\"12\""],[20,"\n","24:\"fvuS\"|7:3|direction:\"ltr\"|linespacing:\"150\""],[20,"因此,为了防止 GPT 想象整个对话,我们只需要指定一个停止序列即可。","27:\"12\""],[20," "],[20,"\n","24:\"NoFM\"|7:3|direction:\"ltr\"|linespacing:\"150\""],[20,"\n","24:\"p47a\"|direction:\"ltr\"|linespacing:\"150\""],[20,"搜索工具"],[20,"\n","24:\"a8hE\"|32:1|direction:\"ltr\"|linespacing:\"115\""],[20,"\n","24:\"pBVr\"|direction:\"ltr\""],[20,"在正确的位置完成停止后,现在需要创建第一个“工具”,它执行 Google 搜索。Colin Eberhardt 将使用 SerpApi 来爬取 Google,并以简单的 SON 格式提供响应。","27:\"12\""],[20,"\n","24:\"MZ0n\"|7:3|direction:\"ltr\""],[20,"下面对工具进行定义,命名为:search","27:\"12\""],[20,"\n","24:\"zQLe\"|7:3|direction:\"ltr\""],[20,"const googleSearch = async (question) =>"],[20,"\n","24:\"FFBj\"|36:177|direction:\"ltr\""],[20," await fetch("],[20,"\n","24:\"hNxl\"|36:177|direction:\"ltr\""],[20," " https:="$&q=$" quot="quot" then=">" res="res" tools="{"],[20,"\n","24:\"Z5C8\"|36:177|direction:\"ltr\""],[20,"" search:="search:" description:="description:" a="a" search="search" engine="engine" useful="useful" for="for" when="when" you="you" need="need" to="to" answer="answer" questions="questions" about="about" 
current="current" events="events" input="input" should="should" be="be" query="query" execute:="execute:" googlesearch="googlesearch" serpapi="serpapi" prompt="promptTemplate"],[20,"\n","24:\"g2Av\"|36:177|direction:\"ltr\""],[20,"" replace="replace" question="question" object="object" map=">" toolname="toolname" join="join" eberhardt="eberhardt" llm="llm" answerquestion="async" let="let" see="see" above="above" allow="allow" the="the" iterate="iterate" until="until" it="it" finds="finds" final="final" while="while" true="true" const="const" response="await" completeprompt="completeprompt" add="add" this="this" action="#" if="if" execute="execute" specified="specified" by="by" llms="llms" actioninput="response.match(/Action" input:="input:" result="await" observation:="observation:" else="else" return="return" answer:="answer:" was="was" temperature="temperature" in="in" newcastle="newcastle" england="england" yesterday="yesterday" colin="colin" f="f" what="what" requires="requires" looking="looking" up="up" information="information" weather="weather" maximum="maximum" yesterday:="yesterday:" at="at" pm="pm" minimum="minimum" average="average" and="and" parser="parser" from="from" calculator:="calculator:" getting="getting" of="of" math="math" expression="expression" tool="tool" valid="valid" mathematical="mathematical" that="that" could="could" executed="executed" simple="simple" calculator="calculator" is="is" square="square" root="root" i="i" use="use" now="now" know="know" c="c" high="high" sf="sf" fahrenheit="fahrenheit" same="same" value="value" celsius="celsius" find="find" san="san" francisco="francisco" history="history" previous="previous" hours="hours" convert="convert" or="or" google="google" gpt="gpt" langchain="langchain" following="following" conversation="conversation" follow="follow" rephrase="rephrase" standalone="standalone" history:="history:" question:="question:" mergetemplate="fs.readFileSync(\"merge.txt\"," merge="merge" chat="chat" 
with="with" new="new" mergehistory="async" await="await" main="main" loop="loop" user="user" rl="rl" can="can" help="help" console="console" q:="q:" equal="equal" world="world" record="record" solving="solving" rubiks="rubiks" cube="cube" rubik="rubik" seconds="seconds" held="held" yiheng="yiheng" china="china" robot="robot" solve="solve" faster="faster" fastest="fastest" time="time" has="has" solved="solved" who="who" made="made" created="created" would="would" an="an" human="human" expect="expect" takes="takes" person="person" three="three" research="research" confirm="confirm" confirmed="confirmed" which="which" set="set" engineer="engineer" albert="albert" beer="beer" his="his" sub1="sub1" reloaded="reloaded" researchers="researchers" realised="realised" they="they" more="more" quickly="quickly" using="using" different="different" type="type" motor="motor" their="their" best="best" mcu="mcu" film="film" critics="critics" avengers:="avengers:" endgame="endgame" plot="plot" outline="outline" thanos="thanos" decimates="decimates" planet="planet" universe="universe" remaining="remaining" avengers="avengers" must="must" out="out" way="way" bring="bring" back="back" vanquished="vanquished" allies="allies" epic="epic" showdown="showdown" die="die" stark="stark" black="black" widow="widow" vision="vision" died="died" avenger="avenger" not="not" so="so" your="your" last="last" wrong="wrong" joel="joel" spolsky="spolsky" langchain-mini="langchain-mini" data-copy-origin="https://shimo.im" style="font-size: 18px;">对话界面"],[20,"\n","24:\"QXvs\"|36:177|direction:\"ltr\""],[20," await fetch(\"https://api.openai.com/v1/completions\", {"],[20,"\n","24:\"AjoF\"|36:177|direction:\"ltr\""],[20," method: \"POST\","],[20,"\n","24:\"sQ2d\"|36:177|direction:\"ltr\""],[20," headers: {"],[20,"\n","24:\"cKOi\"|36:177|direction:\"ltr\""],[20," \"Content-Type\": \"application/json\","],[20,"\n","24:\"PaXk\"|36:177|direction:\"ltr\""],[20," Authorization: \"Bearer \" + 
process.env.OPENAI_API_KEY,"],[20,"\n","24:\"Gxf0\"|36:177|direction:\"ltr\""],[20," },"],[20,"\n","24:\"ywEP\"|36:177|direction:\"ltr\""],[20," body: JSON.stringify({"],[20,"\n","24:\"Y8JO\"|36:177|direction:\"ltr\""],[20," model: \"text-davinci-003\","],[20,"\n","24:\"29CS\"|36:177|direction:\"ltr\""],[20," prompt,"],[20,"\n","24:\"EtaP\"|36:177|direction:\"ltr\""],[20," max_tokens: 256,"],[20,"\n","24:\"nuq1\"|36:177|direction:\"ltr\""],[20," temperature: 0.7,"],[20,"\n","24:\"kdl3\"|36:177|direction:\"ltr\""],[20," stream: false,"],[20,"\n","24:\"ixei\"|36:177|direction:\"ltr\""],[20," }),"],[20,"\n","24:\"NLO7\"|36:177|direction:\"ltr\""],[20," })"],[20,"\n","24:\"FK9K\"|36:177|direction:\"ltr\""],[20," .then((res) => res.json());"],[20,"\n","24:\"19X3\"|36:177|direction:\"ltr\""],[20," .then((res) => res.choices[0].text);"],[20,"\n","24:\"jWqM\"|36:177|direction:\"ltr\""],[20,"\n","24:\"tNAw\"|36:177|direction:\"ltr\""],[20,"const response = await completePrompt(promptWithQuestion);"],[20,"\n","24:\"C1s1\"|36:177|direction:\"ltr\""],[20,"console.log(response.choices[0].text);"],[20,"\n","24:\"UJ4C\"|36:177|direction:\"ltr\""],[20,"得到的结果如下:","27:\"12\""],[20,"\n","24:\"yZpl\"|direction:\"ltr\"|linespacing:\"150\""],[20,"Question: What was the high temperature in SF yesterday in Fahrenheit?"],[20,"\n","24:\"xxua\"|36:177|direction:\"ltr\""],[20,"Thought: I can try searching the answer"],[20,"\n","24:\"LL2H\"|36:177|direction:\"ltr\""],[20,"Action: search"],[20,"\n","24:\"PyUa\"|36:177|direction:\"ltr\""],[20,"Action Input: \"high temperature san francisco yesterday fahrenheit\""],[20,"\n","24:\"iwEZ\"|36:177|direction:\"ltr\""],[20,"Observation: Found an article from the San Francisco Chronicle forecasting"],[20,"\n","24:\"itB3\"|36:177|direction:\"ltr\""],[20," a high of 69 degrees"],[20,"\n","24:\"RSBu\"|36:177|direction:\"ltr\""],[20,"Thought: I can use this to determine the answer"],[20,"\n","24:\"MdUM\"|36:177|direction:\"ltr\""],[20,"Final Answer: The 
high temperature in SF yesterday was 69 degrees Fahrenheit."],[20,"\n","24:\"oD1H\"|36:177|direction:\"ltr\""],[20,"可以看到 GPT 已经确定了执行步骤,即应该执行搜索,使用“昨日旧金山高温华氏度”这个术语。但有意思的是,它已经提前预测出了搜索结果,给出了 69 °F 的答案。","27:\"12\""],[20,"\n","24:\"hnYu\"|7:3|direction:\"ltr\"|linespacing:\"150\""],[20,"令人印象深刻的是,仅仅通过简单的提示,GPT 就已经“推理”了回答这个问题的最佳方法。如果你只是直接问GPT :“昨天旧金山高温是多少?”,它会回答:”对我来说,昨天( 2019 年 8 月 28 日)旧金山的高温是 76 °F。显然,这不是昨天,但该日期报告的温度却是正确的!","27:\"12\""],[20,"\n","24:\"fvuS\"|7:3|direction:\"ltr\"|linespacing:\"150\""],[20,"因此,为了防止 GPT 想象整个对话,我们只需要指定一个停止序列即可。","27:\"12\""],[20," "],[20,"\n","24:\"NoFM\"|7:3|direction:\"ltr\"|linespacing:\"150\""],[20,"\n","24:\"p47a\"|direction:\"ltr\"|linespacing:\"150\""],[20,"搜索工具"],[20,"\n","24:\"a8hE\"|32:1|direction:\"ltr\"|linespacing:\"115\""],[20,"\n","24:\"pBVr\"|direction:\"ltr\""],[20,"在正确的位置完成停止后,现在需要创建第一个“工具”,它执行 Google 搜索。Colin Eberhardt 将使用 SerpApi 来爬取 Google,并以简单的 SON 格式提供响应。","27:\"12\""],[20,"\n","24:\"MZ0n\"|7:3|direction:\"ltr\""],[20,"下面对工具进行定义,命名为:search","27:\"12\""],[20,"\n","24:\"zQLe\"|7:3|direction:\"ltr\""],[20,"const googleSearch = async (question) =>"],[20,"\n","24:\"FFBj\"|36:177|direction:\"ltr\""],[20," await fetch("],[20,"\n","24:\"hNxl\"|36:177|direction:\"ltr\""],[20," " https:="$&q=$" quot="quot" then=">" res="res" tools="{"],[20,"\n","24:\"Z5C8\"|36:177|direction:\"ltr\""],[20,"" search:="search:" description:="description:" a="a" search="search" engine="engine" useful="useful" for="for" when="when" you="you" need="need" to="to" answer="answer" questions="questions" about="about" current="current" events="events" input="input" should="should" be="be" query="query" execute:="execute:" googlesearch="googlesearch" serpapi="serpapi" prompt="promptTemplate"],[20,"\n","24:\"g2Av\"|36:177|direction:\"ltr\""],[20,"" replace="replace" question="question" object="object" map=">" toolname="toolname" join="join" eberhardt="eberhardt" llm="llm" answerquestion="async" let="let" see="see" above="above" allow="allow" the="the" 
iterate="iterate" until="until" it="it" finds="finds" final="final" while="while" true="true" const="const" response="await" completeprompt="completeprompt" add="add" this="this" action="#" if="if" execute="execute" specified="specified" by="by" llms="llms" actioninput="response.match(/Action" input:="input:" result="await" observation:="observation:" else="else" return="return" answer:="answer:" was="was" temperature="temperature" in="in" newcastle="newcastle" england="england" yesterday="yesterday" colin="colin" f="f" what="what" requires="requires" looking="looking" up="up" information="information" weather="weather" maximum="maximum" yesterday:="yesterday:" at="at" pm="pm" minimum="minimum" average="average" and="and" parser="parser" from="from" calculator:="calculator:" getting="getting" of="of" math="math" expression="expression" tool="tool" valid="valid" mathematical="mathematical" that="that" could="could" executed="executed" simple="simple" calculator="calculator" is="is" square="square" root="root" i="i" use="use" now="now" know="know" c="c" high="high" sf="sf" fahrenheit="fahrenheit" same="same" value="value" celsius="celsius" find="find" san="san" francisco="francisco" history="history" previous="previous" hours="hours" convert="convert" or="or" google="google" gpt="gpt" langchain="langchain" following="following" conversation="conversation" follow="follow" rephrase="rephrase" standalone="standalone" history:="history:" question:="question:" mergetemplate="fs.readFileSync(\"merge.txt\"," merge="merge" chat="chat" with="with" new="new" mergehistory="async" await="await" main="main" loop="loop" user="user" rl="rl" can="can" help="help" console="console" q:="q:" equal="equal" world="world" record="record" solving="solving" rubiks="rubiks" cube="cube" rubik="rubik" seconds="seconds" held="held" yiheng="yiheng" china="china" robot="robot" solve="solve" faster="faster" fastest="fastest" time="time" has="has" solved="solved" who="who" made="made" 
created="created" would="would" an="an" human="human" expect="expect" takes="takes" person="person" three="three" research="research" confirm="confirm" confirmed="confirmed" which="which" set="set" engineer="engineer" albert="albert" beer="beer" his="his" sub1="sub1" reloaded="reloaded" researchers="researchers" realised="realised" they="they" more="more" quickly="quickly" using="using" different="different" type="type" motor="motor" their="their" best="best" mcu="mcu" film="film" critics="critics" avengers:="avengers:" endgame="endgame" plot="plot" outline="outline" thanos="thanos" decimates="decimates" planet="planet" universe="universe" remaining="remaining" avengers="avengers" must="must" out="out" way="way" bring="bring" back="back" vanquished="vanquished" allies="allies" epic="epic" showdown="showdown" die="die" stark="stark" black="black" widow="widow" vision="vision" died="died" avenger="avenger" not="not" so="so" your="your" last="last" wrong="wrong" joel="joel" spolsky="spolsky" langchain-mini="langchain-mini" data-copy-origin="https://shimo.im" style="font-size: 18px;">

当前版本的代码一次只能回答一个独立的问题。在上面的例子中,Colin Eberhardt 不得不把两个问题捆绑在同一句话里。因此,更好的界面应该是对话式的:允许用户在保留上下文的前提下提出后续问题(也就是说,不会忘记对话中之前的步骤)。

如何用 GPT 实现这一点并不显而易见:与模型的交互是无状态的,你提供提示,模型返回补全。要构造一段长对话,需要一些相当巧妙的提示工程。Colin Eberhardt 深入研究 LangChain 后,发现它使用了一种有趣的技术。

以下提示采用聊天历史记录和后续问题,要求 GPT 将问题改写为独立问题:
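根据原文,这个改写提示(保存在 merge.txt 中)大致形如下面这样(示意性重建,非逐字引用;${history} 与 ${question} 为待替换的占位符):

```text
Given the following conversation and a follow up question, rephrase the
follow up question to be a standalone question.
Chat History:
${history}
Follow up question: ${question}
Standalone question:
```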

以下代码把前面的函数包装进一个外层循环,从而支持持续对话。每轮迭代都会把问答追加到聊天历史"日志"中,并借助上述提示将每个后续问题改写成可以独立回答的问题。
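这个外层循环的一轮逻辑可以勾勒如下(示意性实现;mergeHistory 与 answerQuestion 通过参数注入,分别对应"改写为独立问题"和前文的 ReAct 问答函数,接口形状均为笔者假设):

```javascript
// 会话外层循环的一轮:先把追问与历史合并成独立问题,再求答案,并更新聊天历史"日志"。
// 依赖项通过参数注入,便于单独演示;真实程序中配合 readline 的 while 循环使用。
const chatTurn = async (question, history, { mergeHistory, answerQuestion }) => {
  const standalone = history
    ? await mergeHistory(question, history) // 有历史时先改写为独立问题
    : question; // 第一轮直接使用原问题
  const answer = await answerQuestion(standalone);
  return { answer, history: history + `Q: ${question}\nA: ${answer}\n` };
};

// 真实程序中的主循环大致如下(示意):
// const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
// let state = { history: "" };
// while (true) {
//   const q = await rl.question("How can I help? ");
//   state = await chatTurn(q, state.history, { mergeHistory, answerQuestion });
//   console.log(state.answer);
// }
```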

如何将这个合并过程应用到前面的例子中?用户首先问"昨天旧金山的最高温度是多少华氏度?",然后追问"换算成摄氏度是多少?"。

当被问及第一个问题时,LLM 编排器搜索了 Google 并回答:"昨天,旧金山的最高温度为 54°F。"随后,聊天记录会按如下方式合并,使后续问题成为一个独立问题:
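合并后的提示大致如下(示意性重建):

```text
Given the following conversation and a follow up question, rephrase the
follow up question to be a standalone question.
Chat History:
Q: What was the high temperature in SF yesterday in Fahrenheit?
A: Yesterday, the high temperature in SF was 54°F
Follow up question: what is that in Celsius?
Standalone question:
```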

通过上述提示,GPT 把追问改写成了"54°F 是多少摄氏度?",这正是 Colin Eberhardt 想要的结果:在原始问题中并入了聊天历史里的关键上下文。综上所述,对话的流程如下:
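按原文描述,整个对话流程可以概括为(示意):

```text
1. 用户输入问题(或追问)
2. 若已有聊天历史,先用"合并提示"把追问改写为一个独立问题
3. 将独立问题送入 ReAct 循环:LLM 逐步思考,并按需调用 search / calculator 工具
4. LLM 给出 Final Answer;本轮问答被追加到聊天历史中,等待下一个问题
```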

现在,我们有了一个由 LLM 编排的对话界面:它利用自身的推理能力,在恰当的时机调用合适的工具,而这一切只用了大约 100 行代码。

"],[20,"\n","24:\"QXvs\"|36:177|direction:\"ltr\""],[20," await fetch(\"https://api.openai.com/v1/completions\", {"],[20,"\n","24:\"AjoF\"|36:177|direction:\"ltr\""],[20," method: \"POST\","],[20,"\n","24:\"sQ2d\"|36:177|direction:\"ltr\""],[20," headers: {"],[20,"\n","24:\"cKOi\"|36:177|direction:\"ltr\""],[20," \"Content-Type\": \"application/json\","],[20,"\n","24:\"PaXk\"|36:177|direction:\"ltr\""],[20," Authorization: \"Bearer \" + process.env.OPENAI_API_KEY,"],[20,"\n","24:\"Gxf0\"|36:177|direction:\"ltr\""],[20," },"],[20,"\n","24:\"ywEP\"|36:177|direction:\"ltr\""],[20," body: JSON.stringify({"],[20,"\n","24:\"Y8JO\"|36:177|direction:\"ltr\""],[20," model: \"text-davinci-003\","],[20,"\n","24:\"29CS\"|36:177|direction:\"ltr\""],[20," prompt,"],[20,"\n","24:\"EtaP\"|36:177|direction:\"ltr\""],[20," max_tokens: 256,"],[20,"\n","24:\"nuq1\"|36:177|direction:\"ltr\""],[20," temperature: 0.7,"],[20,"\n","24:\"kdl3\"|36:177|direction:\"ltr\""],[20," stream: false,"],[20,"\n","24:\"ixei\"|36:177|direction:\"ltr\""],[20," }),"],[20,"\n","24:\"NLO7\"|36:177|direction:\"ltr\""],[20," })"],[20,"\n","24:\"FK9K\"|36:177|direction:\"ltr\""],[20," .then((res) => res.json());"],[20,"\n","24:\"19X3\"|36:177|direction:\"ltr\""],[20," .then((res) => res.choices[0].text);"],[20,"\n","24:\"jWqM\"|36:177|direction:\"ltr\""],[20,"\n","24:\"tNAw\"|36:177|direction:\"ltr\""],[20,"const response = await completePrompt(promptWithQuestion);"],[20,"\n","24:\"C1s1\"|36:177|direction:\"ltr\""],[20,"console.log(response.choices[0].text);"],[20,"\n","24:\"UJ4C\"|36:177|direction:\"ltr\""],[20,"得到的结果如下:","27:\"12\""],[20,"\n","24:\"yZpl\"|direction:\"ltr\"|linespacing:\"150\""],[20,"Question: What was the high temperature in SF yesterday in Fahrenheit?"],[20,"\n","24:\"xxua\"|36:177|direction:\"ltr\""],[20,"Thought: I can try searching the answer"],[20,"\n","24:\"LL2H\"|36:177|direction:\"ltr\""],[20,"Action: search"],[20,"\n","24:\"PyUa\"|36:177|direction:\"ltr\""],[20,"Action Input: 
\"high temperature san francisco yesterday fahrenheit\""],[20,"\n","24:\"iwEZ\"|36:177|direction:\"ltr\""],[20,"Observation: Found an article from the San Francisco Chronicle forecasting"],[20,"\n","24:\"itB3\"|36:177|direction:\"ltr\""],[20," a high of 69 degrees"],[20,"\n","24:\"RSBu\"|36:177|direction:\"ltr\""],[20,"Thought: I can use this to determine the answer"],[20,"\n","24:\"MdUM\"|36:177|direction:\"ltr\""],[20,"Final Answer: The high temperature in SF yesterday was 69 degrees Fahrenheit."],[20,"\n","24:\"oD1H\"|36:177|direction:\"ltr\""],[20,"可以看到 GPT 已经确定了执行步骤,即应该执行搜索,使用“昨日旧金山高温华氏度”这个术语。但有意思的是,它已经提前预测出了搜索结果,给出了 69 °F 的答案。","27:\"12\""],[20,"\n","24:\"hnYu\"|7:3|direction:\"ltr\"|linespacing:\"150\""],[20,"令人印象深刻的是,仅仅通过简单的提示,GPT 就已经“推理”了回答这个问题的最佳方法。如果你只是直接问GPT :“昨天旧金山高温是多少?”,它会回答:”对我来说,昨天( 2019 年 8 月 28 日)旧金山的高温是 76 °F。显然,这不是昨天,但该日期报告的温度却是正确的!","27:\"12\""],[20,"\n","24:\"fvuS\"|7:3|direction:\"ltr\"|linespacing:\"150\""],[20,"因此,为了防止 GPT 想象整个对话,我们只需要指定一个停止序列即可。","27:\"12\""],[20," "],[20,"\n","24:\"NoFM\"|7:3|direction:\"ltr\"|linespacing:\"150\""],[20,"\n","24:\"p47a\"|direction:\"ltr\"|linespacing:\"150\""],[20,"搜索工具"],[20,"\n","24:\"a8hE\"|32:1|direction:\"ltr\"|linespacing:\"115\""],[20,"\n","24:\"pBVr\"|direction:\"ltr\""],[20,"在正确的位置完成停止后,现在需要创建第一个“工具”,它执行 Google 搜索。Colin Eberhardt 将使用 SerpApi 来爬取 Google,并以简单的 SON 格式提供响应。","27:\"12\""],[20,"\n","24:\"MZ0n\"|7:3|direction:\"ltr\""],[20,"下面对工具进行定义,命名为:search","27:\"12\""],[20,"\n","24:\"zQLe\"|7:3|direction:\"ltr\""],[20,"const googleSearch = async (question) =>"],[20,"\n","24:\"FFBj\"|36:177|direction:\"ltr\""],[20," await fetch("],[20,"\n","24:\"hNxl\"|36:177|direction:\"ltr\""],[20," " https:="$&q=$" quot="quot" then=">" res="res" tools="{"],[20,"\n","24:\"Z5C8\"|36:177|direction:\"ltr\""],[20,"" search:="search:" description:="description:" a="a" search="search" engine="engine" useful="useful" for="for" when="when" you="you" need="need" to="to" answer="answer" questions="questions" about="about" 
current="current" events="events" input="input" should="should" be="be" query="query" execute:="execute:" googlesearch="googlesearch" serpapi="serpapi" prompt="promptTemplate"],[20,"\n","24:\"g2Av\"|36:177|direction:\"ltr\""],[20,"" replace="replace" question="question" object="object" map=">" toolname="toolname" join="join" eberhardt="eberhardt" llm="llm" answerquestion="async" let="let" see="see" above="above" allow="allow" the="the" iterate="iterate" until="until" it="it" finds="finds" final="final" while="while" true="true" const="const" response="await" completeprompt="completeprompt" add="add" this="this" action="#" if="if" execute="execute" specified="specified" by="by" llms="llms" actioninput="response.match(/Action" input:="input:" result="await" observation:="observation:" else="else" return="return" answer:="answer:" was="was" temperature="temperature" in="in" newcastle="newcastle" england="england" yesterday="yesterday" colin="colin" f="f" what="what" requires="requires" looking="looking" up="up" information="information" weather="weather" maximum="maximum" yesterday:="yesterday:" at="at" pm="pm" minimum="minimum" average="average" and="and" parser="parser" from="from" calculator:="calculator:" getting="getting" of="of" math="math" expression="expression" tool="tool" valid="valid" mathematical="mathematical" that="that" could="could" executed="executed" simple="simple" calculator="calculator" is="is" square="square" root="root" i="i" use="use" now="now" know="know" c="c" high="high" sf="sf" fahrenheit="fahrenheit" same="same" value="value" celsius="celsius" find="find" san="san" francisco="francisco" history="history" previous="previous" hours="hours" convert="convert" or="or" google="google" gpt="gpt" langchain="langchain" following="following" conversation="conversation" follow="follow" rephrase="rephrase" standalone="standalone" history:="history:" question:="question:" mergetemplate="fs.readFileSync(\"merge.txt\"," merge="merge" chat="chat" 
Some good examples

Here are some example conversations:

Colin Eberhardt notes that it is interesting to dig into the reasoning behind these answers. In this example, the search tool returned a result, yet for some reason the LLM decided it needed to confirm the answer, issuing a slightly modified query.
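This iterate-and-observe behaviour comes from a simple driving loop: the model's output is parsed for an Action, the matching tool is run, and the result is fed back as an Observation until a Final Answer appears. A minimal sketch with illustrative names (`completePrompt` stands for the LLM call shown earlier; the exact parsing is an assumption):

```javascript
// Minimal sketch of the iterate-observe loop described in the article.
// `completePrompt` is any async function that sends a prompt to the LLM
// and returns its completion text; `tools` maps tool names to
// { description, execute } objects.
const answerQuestion = async (completePrompt, tools, promptTemplate, question) => {
  let prompt = promptTemplate.replace("${question}", question);
  // Keep going until the model produces a Final Answer.
  while (true) {
    const response = await completePrompt(prompt);
    prompt += response;
    const action = response.match(/Action: (.*)/)?.[1]?.trim();
    if (action && tools[action]) {
      // The model chose a tool: run it and feed the result back as an
      // Observation, then let the model continue reasoning.
      const input = response.match(/Action Input: "?([^"\n]*)"?/)[1];
      const observation = await tools[action].execute(input);
      prompt += `Observation: ${observation}\n`;
    } else {
      return response.match(/Final Answer: (.*)/)?.[1];
    }
  }
};
```

Because the LLM is passed in as a parameter, the loop can be exercised with a stubbed model that scripts a tool call followed by a final answer.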

What about pop culture? Here is Colin Eberhardt's short chat about the Marvel movies:

As you can see, it quickly starts giving contradictory answers!
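Multi-turn chats like the Marvel one rely on folding the conversation history into each new question before it enters the loop, via another LLM call. A minimal sketch (the template text is an assumption modelled on the article's merge.txt):

```javascript
// Sketch of the history-merging step used for multi-turn chat: before
// each turn, the prior conversation and the new question are rewritten
// into a single standalone question by another LLM call.
const mergeTemplate = `Given the following conversation and a follow up question,
rephrase the follow up question to be a standalone question.
Chat History:
\${history}
Follow up input: \${question}
Standalone question:`;

const mergeHistory = async (completePrompt, history, question) => {
  const prompt = mergeTemplate
    .replace("${history}", history)
    .replace("${question}", question);
  return (await completePrompt(prompt)).trim();
};
```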

Conclusion

Colin Eberhardt says he thoroughly enjoyed the process and learned a great deal about the overall concept of chaining LLM calls together. He found it surprisingly simple to operate, especially the core orchestration/reasoning: give the model a single example and it just runs.

However, Colin Eberhardt also recognises its current weaknesses. The examples he shows cover the happy path; on edge cases it does not work 100% of the time, and he often had to tweak the question to get the desired result.

The same applies to LangChain: understanding how it works under the hood helps when troubleshooting edge cases. For example, sometimes the LLM orchestrator simply decides it does not need the calculator and performs the given calculation itself.
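The calculator tool mentioned here has the same `{ description, execute }` shape as the search tool. The original post evaluates the expression with the expr-eval package's Parser; this dependency-free sketch swaps in a guarded `Function`-based evaluator, which is an assumption, not the article's code:

```javascript
// Calculator tool in the same { description, execute } shape as the
// search tool. The description doubles as the LLM-facing documentation
// in the prompt. A guarded Function-based evaluator stands in for
// expr-eval's Parser so the sketch has no dependencies.
const calculator = {
  description:
    "useful for getting the result of a math expression. The input " +
    "should be a valid mathematical expression that could be executed " +
    "by a simple calculator.",
  execute: async (expression) => {
    // Accept only digits, basic operators, parentheses and whitespace,
    // so arbitrary code can never reach the evaluator.
    if (!/^[\d+\-*/().\s]+$/.test(expression)) {
      throw new Error(`not a plain arithmetic expression: ${expression}`);
    }
    return String(Function(`"use strict"; return (${expression});`)());
  },
};
```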

Colin Eberhardt hopes that everyone using tools like this understands the principles behind them. They are abstractions over carefully crafted prompts, and those prompts are not perfect. To borrow the words of American software engineer Joel Spolsky, these abstractions are leaky in places.

References:

https://blog.scottlogic.com/2023/05/04/langchain-mini.html

