OpenAI #Chat: Answer Text Prompt

Generates a response to a text prompt. By default, the response is generated by the “gpt-4-1106-preview” MODEL (any compatible MODEL can be specified). The text prompt that starts the conversation is defined in two parts: a system message and a user message. Typically, the system message contains the answering rules and the personality of the answering character, and the user message contains the question text. You can also request multiple responses (up to four).
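For orientation, here is a minimal sketch (plain JavaScript, illustrative names) of the request body this step assembles for the Chat Completions endpoint: the system and user messages become the `messages` array, and the number of requested answers becomes `n`.

```javascript
// A sketch (outside Questetra) of the request body this step builds
// for POST https://api.openai.com/v1/chat/completions. Function and
// argument names are illustrative, not part of the real script.
function buildChatRequest(systemMsg, userMsg, numChoices) {
  const body = {
    model: "gpt-4-1106-preview", // default model (config M)
    n: numChoices,               // number of answers (config B1, 1..4)
    messages: []
  };
  // Either message may be empty, but not both (the script throws then).
  if (systemMsg !== "") {
    body.messages.push({ role: "system", content: systemMsg });
  }
  if (userMsg !== "") {
    body.messages.push({ role: "user", content: userMsg });
  }
  return body;
}

const req = buildChatRequest(
  "You are a polite support agent.", // answer rules / personality
  "Summarize this claim.",           // question text
  2                                  // request two candidate answers
);
```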

Configs for this Auto Step
AuthzConfU1
U1: Select HTTP_Authz Setting (Secret API Key as “Fixed Value”) *
StrConfA1
A1: Set SYSTEM Message Prompt#{EL}
StrConfA2
A2: Set USER Message Prompt#{EL}
StrConfB1
B1: Set Number of Responses (default 1, up to 4)#{EL}
SelectConfC1
C1: Select STRING where Response1 will be stored (update) *
SelectConfC2
C2: Select STRING where Response2 will be stored (update)
SelectConfC3
C3: Select STRING where Response3 will be stored (update)
SelectConfC4
C4: Select STRING where Response4 will be stored (update)
StrConfM
M: Set MODEL Name (default “gpt-4-1106-preview”)#{EL}
StrConfU2
U2: Set OpenAI Organization ID (“org-xxxx”)#{EL}
StrConfU3
U3: Set End-User ID for Monitoring or Detection (“user123456”)#{EL}
StrConfB2
B2: Set Limit of Response Tokens (default 4095)#{EL}
StrConfB3
B3: Set Stop Sequences, one per line (e.g. “.”)#{EL}
SelectConfD1
D1: Select NUMERIC where PROMPT Tokens will be stored (update)
SelectConfD2
D2: Select NUMERIC where COMPLETION Tokens will be stored (update)
SelectConfD3
D3: Select NUMERIC where Total Tokens will be stored (update)
SelectConfD4
D4: Select STRING where Finish Reasons will be stored (update)
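When left empty, the optional configs above fall back to their defaults. A minimal sketch of that fallback logic in plain JavaScript (the helper names are illustrative; the real script reads values via `configs.get`):

```javascript
// Empty-string configs fall back to defaults; numeric configs are
// parsed with parseInt. Illustrative helpers, not the Questetra API.
function withDefault(value, fallback) {
  return value !== "" ? value : fallback;
}
function parseCount(value, fallback) {
  const n = parseInt(value, 10);
  return isNaN(n) ? fallback : n;
}

withDefault("", "gpt-4-1106-preview"); // M:  model name default
parseCount("", 1);                     // B1: number of responses defaults to 1
parseCount("", 4095);                  // B2: response token limit defaults to 4095
```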
Script
// Script Example of Business Process Automation
// for 'engine type: 3' ("GraalJS standard mode")
// cf. 'engine type: 2' ("GraalJS Nashorn compatible mode") (renamed from "GraalJS" at 20230526)

//////// START "main()" /////////////////////////////////////////////////////////////////

main();
function main(){ 

////// == Config Retrieving / 工程コンフィグの参照 ==
const strAuthzSetting           = configs.get      ( "AuthzConfU1" );  /// REQUIRED
  engine.log( " AutomatedTask Config: Authz Setting: " + strAuthzSetting );
const strOrgId                  = configs.get      ( "StrConfU2" );    // NotRequired
  engine.log( " AutomatedTask Config: OpenAI-Organization: " + strOrgId );
const strEndUserId              = configs.get      ( "StrConfU3" ) !== "" ?  // NotRequired
                                  configs.get      ( "StrConfU3" ) :
                                  "m" + processInstance.getProcessModelInfoId().toString(); // (default)
  engine.log( " AutomatedTask Config: End User IDs: " + strEndUserId );

const strModel                  = configs.get      ( "StrConfM" ) !== "" ?   // NotRequired
                                  configs.get      ( "StrConfM" ) : "gpt-4-1106-preview"; // (default)
  engine.log( " AutomatedTask Config: OpenAI Model: " + strModel );

const strSystemMsg              = configs.get      ( "StrConfA1" );    // NotRequired
const strUserMsg                = configs.get      ( "StrConfA2" );    // NotRequired
  if( strUserMsg === "" && strSystemMsg    === ""){
    throw new Error( "\n AutomatedTask ConfigError:" +
                     " Config {A1:SystemMsg} or {A2:UserMsg} required \n" );
  }
const strChoises                = configs.get      ( "StrConfB1" );    // NotRequired
const numChoises                = isNaN(parseInt(strChoises,10)) ?
                                  1 : parseInt(strChoises,10);
const strLimit                  = configs.get      ( "StrConfB2" );    // NotRequired
const numLimit                  = isNaN(parseInt(strLimit,10)) ?
                                  4095 : parseInt(strLimit,10);
const strStops                  = configs.get      ( "StrConfB3" );    // NotRequired
const arrStops                  = strStops !== "" ?
                                  strStops.split("\n") : null;

const strPocketResponse1        = configs.getObject( "SelectConfC1" ); /// REQUIRED
const strPocketResponse2        = configs.getObject( "SelectConfC2" ); // NotRequired
const strPocketResponse3        = configs.getObject( "SelectConfC3" ); // NotRequired
const strPocketResponse4        = configs.getObject( "SelectConfC4" ); // NotRequired
const numPocketPromptTokens     = configs.getObject( "SelectConfD1" ); // NotRequired
const numPocketCompletionTokens = configs.getObject( "SelectConfD2" ); // NotRequired
const numPocketTotalTokens      = configs.getObject( "SelectConfD3" ); // NotRequired
const strPocketFinishReasons    = configs.getObject( "SelectConfD4" ); // NotRequired



////// == Data Retrieving / ワークフローデータの参照 ==
// (Nothing. Retrieved via Expression Language in Config Retrieving)


////// == Calculating / 演算 ==

//// OpenAI API > Documentation > API REFERENCE > CHAT
//// https://platform.openai.com/docs/api-reference/chat

/// prepare json
let strJson = {};
    strJson.model = strModel;
    strJson.user  = strEndUserId;
    //  strJson.response_format = {};
    //  strJson.response_format.type = "json_object"; // valid JSON mode
    //  To Make response_content JSON ('json' required in Request Msg)
    strJson.n          = numChoises;
    strJson.max_tokens = numLimit;
    if ( arrStops !== null ){
      strJson.stop = [];
      // Up to 4 sequences where the API will stop generating further tokens.
      const numMaxSeq = 4;
      for ( let i = 0; i < arrStops.length && i < numMaxSeq; i++ ){
        if ( arrStops[i] === "- - -" ){
          strJson.stop.push ( "\n" );
        }else{
          strJson.stop.push ( arrStops[i] );
        }
      }
    }
    strJson.messages = [];
    if ( strSystemMsg !=="" ) {
      let objSystemMsg = {};
          objSystemMsg.role = "system";
          objSystemMsg.content = strSystemMsg;
      strJson.messages.push ( objSystemMsg );
    }
    if ( strUserMsg !=="" ) {
      let objUserMsg = {};
          objUserMsg.role = "user";
          objUserMsg.content = strUserMsg;
      strJson.messages.push ( objUserMsg );
    }

/// prepare request1
let request1Uri = "https://api.openai.com/v1/chat/completions";
let request1 = httpClient.begin(); // HttpRequestWrapper
    request1 = request1.authSetting( strAuthzSetting ); // with "Authorization: Bearer XX"
    if ( strOrgId !== "" ){
      request1 = request1.header( "OpenAI-Organization", strOrgId );
    }
    request1 = request1.body( JSON.stringify( strJson ), "application/json" );

/// try request1
const response1     = request1.post( request1Uri ); // HttpResponseWrapper
engine.log( " AutomatedTask ApiRequest1 Start: " + request1Uri );
const response1Code = response1.getStatusCode() + ""; // JavaNum to string
const response1Body = response1.getResponseAsString();
engine.log( " AutomatedTask ApiResponse1 Status: " + response1Code );
if( response1Code !== "200"){
  throw new Error( "\n AutomatedTask UnexpectedResponseError: " +
                    response1Code + "\n" + response1Body + "\n" );
}


/// parse response1
/* engine.log( response1Body ); // debug
{
  "id": "chatcmpl-8JF1v1NheMIZfeX2AcBjKunKUMN8p",
  "object": "chat.completion",
  "created": 1699596699,
  "model": "gpt-4-1106-preview",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "お客様、この話題につ ... ... "
      },
      "finish_reason": "stop"
    },
    {
      "index": 1,
      "message": {
        "role": "assistant",
        "content": "この度の新聞記事に関し ... ... "
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 1678,
    "completion_tokens": 1401,
    "total_tokens": 3079
  },
  "system_fingerprint": "fp_a24b4d720c"
}
*/
const response1Obj = JSON.parse( response1Body );

let arrFinishReasons = [];
for ( let i = 0; i < response1Obj.choices.length; i++ ){
  arrFinishReasons.push ( response1Obj.choices[i].finish_reason );
}


////// == Data Updating / ワークフローデータへの代入 ==

if( strPocketResponse1 !== null ){
  engine.setData( strPocketResponse1,
                  response1Obj.choices[0]?.message.content ?? ""
                );  // optional chaining - nullish coalescing
}
if( strPocketResponse2 !== null ){
  engine.setData( strPocketResponse2, response1Obj.choices[1]?.message.content ?? "" );
}
if( strPocketResponse3 !== null ){
  engine.setData( strPocketResponse3, response1Obj.choices[2]?.message.content ?? "" );
}
if( strPocketResponse4 !== null ){
  engine.setData( strPocketResponse4, response1Obj.choices[3]?.message.content ?? "" );
}

if( numPocketPromptTokens !== null ){
  engine.setData( numPocketPromptTokens, new java.math.BigDecimal(
                  response1Obj.usage.prompt_tokens ?? 0
                ));
}
if( numPocketCompletionTokens !== null ){
  engine.setData( numPocketCompletionTokens, new java.math.BigDecimal(
                  response1Obj.usage.completion_tokens ?? 0
                ));
}
if( numPocketTotalTokens !== null ){
  engine.setData( numPocketTotalTokens, new java.math.BigDecimal(
                  response1Obj.usage.total_tokens ?? 0
                ));
}
// "??": Nullish coalescing operator (ES11)
// https://developer.mozilla.org/docs/Web/JavaScript/Reference/Operators/Nullish_coalescing

if( strPocketFinishReasons !== null ){
  engine.setData( strPocketFinishReasons, arrFinishReasons.join('\n') );
}

} //////// END "main()" /////////////////////////////////////////////////////////////////


/*
Notes:
- This [Automated Step] obtains the Text Response via OpenAI API (Chat endpoint).
    - Up to 4 texts can be generated (default: 1)
- If you place this [Automated Step] in the workflow diagram, communication will occur every time a process arrives.
    - Request from the Questetra BPM Suite server to the OpenAI server.
    - Analyzes the response from the OpenAI server and stores the necessary information.
- [HTTP Authz Settings] is required for workflow apps that include this [Automated Step].
    - An API key is required to use OpenAI API. Please obtain an API key in advance.
        - https://platform.openai.com/api-keys
    - Set 'Secret API Key' as communication token. [HTTP Authz Settings] > [Token Fixed Value]
- Model endpoint compatibility (as of Nov 2023)
    - `gpt-4` (dated model releases)
    - `gpt-4-1106-preview`
    - `gpt-4-32k` (dated model releases)
    - `gpt-3.5-turbo` (dated model releases)
    - `gpt-3.5-turbo-16k` (dated model releases)
    - fine-tuned versions of `gpt-3.5-turbo`
    - see: https://platform.openai.com/docs/models/model-endpoint-compatibility (/v1/chat/completions)
- GPT-4 Turbo with 128K context
    - `gpt-4-1106-preview`: a preview of the next generation of GPT-4 (GPT-4 Turbo)
        - the first version of GPT-4 in March 2023 (available to all developers in July 2023)
    - has knowledge of world events up to April 2023.
        - CEO Altman, "We will try to never let it get that out of date again." 
    - has a 128k context window so it can fit the equivalent of more than 300 pages of text in a single prompt.
    - at a 3x cheaper price for input tokens and a 2x cheaper price for output tokens compared to GPT-4.

APPENDIX
- Setting a large allowance for generated response text may exceed the system limit.
    - Generation will be aborted if the allowance is reached.
        - `"finish_reason":"length",`
        - If a large number is set for "Number of Responses," the response is more likely to be interrupted.
    - For English, one word or symbol often counts as one token.
        - For average English, it is about 1 token for 4 characters.
        - In Japanese, a single character may be divided into multiple tokens.
        - In the case of average Japanese, a single character is about one token.
    - You can check the approximate number of word tokens at tokenizer.
        - https://platform.openai.com/tokenizer
- Up to four Stop Sequences can be set (the fifth and subsequent lines are ignored)
    - If you set `\t` or other characters, they will be escaped.
    - To set a newline code (`\n`), set `- - -`. (experimental)
        - The response will always be a single line.
- Note that when the End-User ID (U3) is left empty, the workflow app ID is automatically assigned to the `user` parameter of the OpenAI API. (experimental)
    - `processInstance.getProcessModelInfoId()`
- In settings that refer to numeric type data, be careful not to mix in digit separators.
    - The formatting function `#sformat` is useful. (Java String.format)
    - e.g. `#{#sformat("%1.1f", #q_numeric)}` (rounded to one decimal place)
    - "R2272: Output of Strings via EL"
        - https://questetra.zendesk.com/hc/en-us/articles/360024292872-R2272-Output-of-Strings-via-EL-syntax
- If the number of responses is set to more than one, "Finish Reasons" will be in multiple lines.
     - If multiple lines are expected, set the data item to be stored as a multiline string.

*/

Warning: Freely modifiable JavaScript (ECMAScript) code. No warranty of any kind.
(Installing Addon Auto-Steps is available only on the Professional edition.)

Notes

  • This [Automated Step] obtains the Text Response via OpenAI API (Chat endpoint).
    • Up to 4 sentences can be generated (default: 1)
  • If you place this [Automated Step] in the workflow diagram, communication will occur every time a process arrives.
    • Requests from the Questetra BPM Suite server to the OpenAI server.
    • Analyzes the response from the OpenAI server and stores the necessary information.
  • [HTTP Authz Settings] are required for workflow apps that include this [Automated Step].
    • An API key is required to use OpenAI API. Please obtain an API key in advance.
    • Set ‘Secret API Key’ as the communication token. [HTTP Authz Settings] > [Token Fixed Value]
  • Model endpoint compatibility (as of Nov 2023)
  • GPT-4 Turbo with 128K context
    • gpt-4-1106-preview: a preview of the next generation of GPT-4 (GPT-4 Turbo)
      • The first version of GPT-4 in March 2023 (available to all developers in July 2023)
    • Has knowledge of world events up to April 2023.
      • CEO Altman, “We will try to never let it get that out of date again.”
    • Has a 128k context window so it can fit the equivalent of more than 300 pages of text in a single prompt.
    • Compared to GPT-4, the input token price is 3 times cheaper and the output token price is 2 times cheaper.
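For reference, the same request could be made outside Questetra with Node 18's built-in `fetch`. The API key, organization ID, and prompt below are placeholders; in the step itself, the `Authorization: Bearer` header is supplied by the [HTTP Authz Settings].

```javascript
// Placeholder Secret API Key; in the Auto Step this comes from the
// HTTP Authz setting, never from the script itself.
const apiKey = "sk-...";

const options = {
  method: "POST",
  headers: {
    "Authorization": "Bearer " + apiKey,
    "Content-Type": "application/json"
    // "OpenAI-Organization": "org-xxxx" // optional, as in config U2
  },
  body: JSON.stringify({
    model: "gpt-4-1106-preview",
    messages: [{ role: "user", content: "Hello" }],
    n: 1
  })
};

// const res  = await fetch("https://api.openai.com/v1/chat/completions", options);
// const data = await res.json(); // data.choices[0].message.content
```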

Capture

Creates a response to a text-only prompt. The "gpt-4-1106-preview" MODEL is used by default, but any compatible MODEL can be set in the config. The text prompt that starts the conversation is set separately as a "system message" (containing rules or personality) and a "user message".

Appendix

  • Setting a large allowance for generated response text may exceed the system limit.
    • Generation will be aborted if the limit is reached.
      • "finish_reason":"length",
      • If a large number is set for “Number of Responses,” the response is more likely to be interrupted.
    • For English, one word or symbol often counts as one token.
      • For average English, it is about 1 token for 4 characters.
      • In Japanese, a single character may be divided into multiple tokens.
      • In the case of average Japanese, a single character is about one token.
    • You can check the approximate number of word tokens at tokenizer.
  • Up to four Stop Sequences can be set (the fifth and subsequent lines are ignored)
    • If you set \t or other characters, they will be escaped.
    • To set a newline code (\n), set - - -. (experimental)
      • The response will always be a single line.
  • Note that when the End-User ID (U3) is left empty, the workflow app ID is automatically assigned to the user parameter of the OpenAI API. (experimental)
    • processInstance.getProcessModelInfoId()
  • In settings that refer to numeric type data, be careful not to mix in digit separators.
  • If the number of responses is set to more than one, the Finish Reasons will span multiple lines.
    • If multiple lines are expected, set the data item to be a multiline string.
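The Stop Sequences handling described above can be sketched in plain JavaScript (illustrative function name, mirroring the script's loop):

```javascript
// Turn the multi-line Stop Sequences config (B3) into the API's
// `stop` array: only the first four lines are used, and the
// placeholder "- - -" stands for a newline ("\n").
function toStopArray(configText) {
  if (configText === "") return null; // no stop sequences configured
  return configText
    .split("\n")
    .slice(0, 4) // the API accepts at most 4 stop sequences
    .map(line => (line === "- - -" ? "\n" : line));
}
```

For example, a three-line config `###` / `- - -` / `END` becomes `["###", "\n", "END"]`, and a fifth line is silently dropped.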
