OpenAI: Chat, Interact with Parameters

Communicates with OpenAI API (ChatGPT). Instructions (PROMPT) with conversation history and/or advanced parameters are possible: Sampling temperature, Sampling Top_P, Presence penalty, Frequency penalty, Logit bias. The model ID used is “gpt-3.5-turbo”.

Auto Step icon
Configs for this Auto Step
AuthzConfU
U: Select HTTP_Authz Setting (Secret API Key as “Fixed Value”) *
StrConfA0
A0: Set Responder Role (SYSTEM Role)#{EL}
StrConfPro1
Pro1: Set 1st Request PROMPT#{EL}
StrConfCom1
Com1: Set 1st Response COMPLETION#{EL}
StrConfPro2
Pro2: Set 2nd Request PROMPT#{EL}
StrConfCom2
Com2: Set 2nd Response COMPLETION#{EL}
StrConfPro3
Pro3: Set 3rd Request PROMPT#{EL}
StrConfCom3
Com3: Set 3rd Response COMPLETION#{EL}
StrConfA1
A1: Set Request Message PROMPT *#{EL}
StrConfA2
A2: Set Parameters (Temp Top_P P-penalty F-penalty) in 4 lines#{EL}
StrConfA3
A3: Set LogitBias (TokenID and Bias value pairs) for each line#{EL}
StrConfA4
A4: Set Number of Responses (default 1)#{EL}
StrConfA5
A5: Set Limit of Response Tokens (default 2048)#{EL}
StrConfA6
A6: Set Stop Words (eg “.”)#{EL}
StrConfB1
B1: Set FieldNames that store COMPLETION, one per line (update)#{EL}
SelectConfB2
B2: To store the whole Response JSON, Select STRING (update)
SelectConfC1
C1: Select NUMERIC that stores PROMPT Tokens (update)
SelectConfC2
C2: Select NUMERIC that stores COMPLETION Tokens (update)
SelectConfC3
C3: Select NUMERIC that stores Total Tokens (update)
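The four-line parameter config (A2) maps line by line to temperature, top_p, presence_penalty and frequency_penalty, each falling back to the API default when the line is blank or not a number. A minimal sketch of that parsing, as a hypothetical standalone helper runnable outside Questetra:

```javascript
// Parse the 4-line A2 parameter config; blank or non-numeric lines
// fall back to the OpenAI API defaults (1, 1, 0, 0).
function parseParams(strParams) {
  const arr = strParams !== "" ? strParams.split("\n") : null;
  const pick = (i, def) =>
    isNaN(parseFloat(arr?.[i])) ? def : parseFloat(arr[i]);
  return {
    temperature:       pick(0, 1),
    top_p:             pick(1, 1),
    presence_penalty:  pick(2, 0),
    frequency_penalty: pick(3, 0),
  };
}

// e.g. only the 1st and 3rd lines are set:
const p = parseParams("0.2\n\n0.5");
```

This mirrors the fallback logic the script applies with `parseFloat` and `isNaN`.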
Script
// GraalJS Script (engine type: 2)

//////// START "main()" /////////////////////////////////////////////////////////////////

main();
function main(){ 

////// == Config Retrieving / 工程コンフィグの参照 ==
const strAuthzSetting = configs.get      ( "AuthzConfU" );   /// REQUIRED
  engine.log( " AutomatedTask Config: Authz Setting: " + strAuthzSetting );

const strSystemRole   = configs.get      ( "StrConfA0" );    // NotRequired
const strLogPro1      = configs.get      ( "StrConfPro1" );  // NotRequired
const strLogCom1      = configs.get      ( "StrConfCom1" );  // NotRequired
const strLogPro2      = configs.get      ( "StrConfPro2" );  // NotRequired
const strLogCom2      = configs.get      ( "StrConfCom2" );  // NotRequired
const strLogPro3      = configs.get      ( "StrConfPro3" );  // NotRequired
const strLogCom3      = configs.get      ( "StrConfCom3" );  // NotRequired
const strPrompt       = configs.get      ( "StrConfA1" );    /// REQUIRED
  if( strPrompt     === "" ){
    throw new Error( "\n AutomatedTask ConfigError:" +
                     " Config {A1:Prompt} MUST NOT be empty \n" );
  }

const strParams       = configs.get      ( "StrConfA2" );    // NotRequired
const arrParams       = strParams !== "" ? strParams.split("\n") : null;
const numTemperature  = isNaN(parseFloat(arrParams?.[0])) ? 1 : parseFloat( arrParams[0] );
const numTopP         = isNaN(parseFloat(arrParams?.[1])) ? 1 : parseFloat( arrParams[1] );
const numPresPenalty  = isNaN(parseFloat(arrParams?.[2])) ? 0 : parseFloat( arrParams[2] );
const numFreqPenalty  = isNaN(parseFloat(arrParams?.[3])) ? 0 : parseFloat( arrParams[3] );
// const jsonLogitBias   = arrParams?.[4] ? JSON.stringify( arrParams[4] ) : null;
  // Number(undefined)     // NaN
  // Number(null)          // 0 ☆
  // Number('100a')        // NaN
  // parseFloat(undefined) // NaN
  // parseFloat(null)      // NaN

const strBias         = configs.get      ( "StrConfA3" );    // NotRequired
const arrBias         = strBias !== "" ? strBias.split("\n") : null;
const strChoises      = configs.get      ( "StrConfA4" );    // NotRequired
const numChoises      = isNaN(parseInt(strChoises,10)) ? 1 : parseInt(strChoises,10);
const strLimit        = configs.get      ( "StrConfA5" );    // NotRequired
const numLimit        = isNaN(parseInt(strLimit,10)) ? 2048 : parseInt(strLimit,10);
const strStops        = configs.get      ( "StrConfA6" );    // NotRequired
const arrStops        = strStops !== "" ? strStops.split("\n") : null;
const strQfields      = configs.get      ( "StrConfB1" );    // NotRequired
const arrQfields      = strQfields !== "" ? strQfields.split("\n") : null;

const strPocketResponseJson     = configs.getObject( "SelectConfB2" ); // NotRequired
const numPocketPromptTokens     = configs.getObject( "SelectConfC1" ); // NotRequired
const numPocketCompletionTokens = configs.getObject( "SelectConfC2" ); // NotRequired
const numPocketTotalTokens      = configs.getObject( "SelectConfC3" ); // NotRequired



////// == Data Retrieving / ワークフローデータの参照 ==
// (Nothing. Retrieved via Expression Language in Config Retrieving)


////// == Calculating / 演算 ==

//// OpenAI API > Documentation > API REFERENCE > CHAT
//// https://platform.openai.com/docs/api-reference/chat

/// prepare json
let strJson = {};
    strJson.model = "gpt-3.5-turbo";
    strJson.messages = [];
    if ( strSystemRole !=="" ) {
      let objSystemRole = {};
          objSystemRole.role = "system";
          objSystemRole.content = strSystemRole;
      strJson.messages.push ( objSystemRole );
    }
    if ( strLogPro1 !=="" && strLogCom1 !=="" ) {
      let objLogPro = {};
          objLogPro.role = "user";
          objLogPro.content = strLogPro1;
      strJson.messages.push ( objLogPro );
      let objLogCom = {};
          objLogCom.role = "assistant";
          objLogCom.content = strLogCom1;
      strJson.messages.push ( objLogCom );
    }
    if ( strLogPro2 !=="" && strLogCom2 !=="" ) {
      let objLogPro = {};
          objLogPro.role = "user";
          objLogPro.content = strLogPro2;
      strJson.messages.push ( objLogPro );
      let objLogCom = {};
          objLogCom.role = "assistant";
          objLogCom.content = strLogCom2;
      strJson.messages.push ( objLogCom );
    }
    if ( strLogPro3 !=="" && strLogCom3 !=="" ) {
      let objLogPro = {};
          objLogPro.role = "user";
          objLogPro.content = strLogPro3;
      strJson.messages.push ( objLogPro );
      let objLogCom = {};
          objLogCom.role = "assistant";
          objLogCom.content = strLogCom3;
      strJson.messages.push ( objLogCom );
    }

    let objNewMsg = {};
        objNewMsg.role = "user";
        objNewMsg.content = strPrompt;
    strJson.messages.push ( objNewMsg );

    // include a parameter only when its config line is explicitly set
    if ( arrParams?.[0] !== "" && arrParams?.[0] !== undefined ) {
      strJson.temperature       = numTemperature;
    }
    if ( arrParams?.[1] !== "" && arrParams?.[1] !== undefined ) {
      strJson.top_p             = numTopP;
    }
    if ( arrParams?.[2] !== "" && arrParams?.[2] !== undefined ) {
      strJson.presence_penalty  = numPresPenalty;
    }
    if ( arrParams?.[3] !== "" && arrParams?.[3] !== undefined ) {
      strJson.frequency_penalty = numFreqPenalty;
    }

    strJson.n          = numChoises;
    strJson.max_tokens = numLimit;
    strJson.user       = "m" + processInstance.getProcessModelInfoId().toString();
    if ( arrStops !== null ){
      strJson.stop = [];
      for ( let i = 0; i < arrStops.length; i++ ){
        if ( arrStops[i] === "- - -" ){
          strJson.stop.push ( "\n" );
        }else{
          strJson.stop.push ( arrStops[i] );
        }
      }
    }
    if ( arrBias !== null ){
      strJson.logit_bias = {};
      for ( let i = 0; i < arrBias.length; i++ ){
        let arrNumParts = arrBias[i].match( /-?\d+/g ); // numbers (including with minus signs)
        if (arrNumParts !== null && arrNumParts.length >= 2) { // skip lines with no numbers
          strJson.logit_bias[arrNumParts[0]] = Number(arrNumParts[1]);
        }
      }
    }

/* engine.log( JSON.stringify( strJson ) ); // debug
{
  "model":"gpt-3.5-turbo",
  "messages":[{
      "role":"system",
      "content":"Start with '>>>'. End with '<<<'."
    },{
      "role":"user",
      "content":"What destinations do you recommend?"
    },{
      "role":"assistant","content":">>> As an AI language model ... interests and budget. <<<
    "},{
      "role":"user","content":"What other Japanese destinations ...  the 4?"
    }
  ],
  "top_p":0.8,
  "presence_penalty":0.1,
  "frequency_penalty":0.1,
  "n":3,
  "max_tokens":200,
  "user":"m2933",
  "logit_bias":{
    "52":2,
    "14053":10,
    "16504":5
  }
}
*/

/// prepare request1
let request1Uri = "https://api.openai.com/v1/chat/completions";
let request1 = httpClient.begin(); // HttpRequestWrapper
    request1 = request1.authSetting( strAuthzSetting ); // with "Authorization: Bearer XX"
    request1 = request1.body( JSON.stringify( strJson ), "application/json" );

/// try request1
const response1     = request1.post( request1Uri ); // HttpResponseWrapper
engine.log( " AutomatedTask ApiRequest1 Start: " + request1Uri );
const response1Code = response1.getStatusCode() + ""; // JavaNum to string
const response1Body = response1.getResponseAsString();
engine.log( " AutomatedTask ApiResponse1 Status: " + response1Code );
if( response1Code !== "200"){
  throw new Error( "\n AutomatedTask UnexpectedResponseError: " +
                    response1Code + "\n" + response1Body + "\n" );
}


/// parse response1
/* engine.log( response1Body ); // debug
{
  "id":"chatcmpl-6rcsicnNlNV13EhppZVSXAuB8DLDy",
  "object":"chat.completion",
  "created":1678238864,
  "model":"gpt-3.5-turbo-0301",
  "usage":{
    "prompt_tokens":150,
    "completion_tokens":560,
    "total_tokens":710
  },
  "choices":[{
    "message":{
      "role":"assistant",
      "content":">>> Japan is a beautiful ... to visit in Japan. <<<"
    },
    "finish_reason":"stop",
    "index":0
  },{
    "message":{
      "role":"assistant",
      "content":">>> Japan has many beautiful ... many amazing destinations Japan has to offer. <<<"
    },
    "finish_reason":"stop",
    "index":1
  },{
    "message":{
      "role":"assistant",
      "content":">>> There are many other ... festivals, and local cuisine.\n\nEach of these"
  },
  "finish_reason":"length",
  "index":2
  }]
}
*/
const response1Obj = JSON.parse( response1Body );


////// == Data Updating / ワークフローデータへの代入 ==

if( strPocketResponseJson !== null ){
  engine.setData( strPocketResponseJson, response1Body );
}

for ( let i = 0; i < response1Obj.choices.length; i++ ) {
  if( arrQfields?.[i] != null &&
      engine.findDataDefinitionByVarName ( arrQfields[i] ) !== null ){
    engine.setDataByVarName( arrQfields[i],
                  response1Obj.choices[i].message.content ?? ""
                );
  }
}

if( numPocketPromptTokens !== null ){
  engine.setData( numPocketPromptTokens, new java.math.BigDecimal(
                  response1Obj.usage.prompt_tokens ?? 0
                ));
}
if( numPocketCompletionTokens !== null ){
  engine.setData( numPocketCompletionTokens, new java.math.BigDecimal(
                  response1Obj.usage.completion_tokens ?? 0
                ));
}
if( numPocketTotalTokens !== null ){
  engine.setData( numPocketTotalTokens, new java.math.BigDecimal(
                  response1Obj.usage.total_tokens ?? 0
                ));
}
// "??": Nullish coalescing operator (ES11)
// https://developer.mozilla.org/docs/Web/JavaScript/Reference/Operators/Nullish_coalescing

} //////// END "main()" /////////////////////////////////////////////////////////////////


/*
Notes:
- About "OpenAI API"
    - https://platform.openai.com/docs/introduction/overview
    - The OpenAI API can be applied to virtually any task that involves understanding or generating natural language or code.
    - For example, if you give the API the prompt, "Write a tagline for an ice cream shop",
    - it will return a completion like "We serve up smiles with every scoop!"
- If you place this "Addon Automated Step" on the Workflow diagram, 
    - a response will be retrieved automatically when the token reaches the automated step.
    - A request prompt is automatically sent to the OpenAI API server. (REST API communication)
    - The response text sent back from the OpenAI API server is automatically parsed.
    - For example, the auto-step obtaining "advice from AI" can be integrated into business processes.
- API key is required to use the OpenAI API.
    - Obtain the API key to be used in advance.
    - Set the "Secret API Key" as the communication token.[HTTP Authz Settings] > [Token Fixed Value]
- This Automated Step is fully upward compatible with "OpenAI: Chat, Start".
    - The same requests as in "OpenAI: Chat, Start" are possible.
    - https://support.questetra.com/addons/openai-chat-start-2023/
- Contextual requests are possible.
    - The oldest instruction should be set to "1st PROMPT".
    - The response to that directive should be set to "1st COMPLETION".
    - A history that does not contain both a PROMPT and a COMPLETION will be invalid.
    - Sets of PROMPT and COMPLETION can be registered up to three times.
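The history rules above (oldest pair first, a pair only counts when both PROMPT and COMPLETION are set) can be sketched as a hypothetical helper mirroring the script's message assembly:

```javascript
// Build the Chat API "messages" array: optional SYSTEM role, up to
// three PROMPT/COMPLETION history pairs (a pair is skipped unless
// both sides are non-empty), then the new user PROMPT.
function buildMessages(systemRole, historyPairs, prompt) {
  const messages = [];
  if (systemRole !== "") {
    messages.push({ role: "system", content: systemRole });
  }
  for (const [pro, com] of historyPairs) {
    if (pro !== "" && com !== "") {
      messages.push({ role: "user", content: pro });
      messages.push({ role: "assistant", content: com });
    }
  }
  messages.push({ role: "user", content: prompt });
  return messages;
}

// 2nd pair is incomplete, so it is dropped:
const msgs = buildMessages(
  "You are a helpful assistant.",
  [["What destinations do you recommend?", ">>> ... <<<"], ["", ""]],
  "What other Japanese destinations?"
);
```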
- It is not necessary to set all parameters.
    - Parameter values can be tried in the Playground.
    - https://platform.openai.com/playground?mode=chat
    - The settable ranges in this step may differ from those in the Playground.
- Parameter 1st line: Sampling temperature (TEMPERATURE)
    - range:"[0,2]", default: "1"
    - "Higher values like 0.8 will make the output more random"
    - Sets the randomness level (creativity).
    - For fact-finding or tabulation, set to zero.
- Parameter 2nd line: Sampling % (top_p)
    - range:"(0,1]", default: "1"
    - "the model considers the results of the tokens with top_p probability mass"
    - Sets the top percentage of word tokens to be considered.
    - It is not recommended to set this together with "sampling temperature". (as of 202303)
- Parameter 3rd line: Presence_penalty
    - range:"[-2,2]", default: "0"
- Parameter 4th line: Frequency_penalty
    - range:"[-2,2]", default: "0"
- Logit bias: Token adjustment (logit_bias)
    - range:"[-100,100]"
    - eg: "16504 10"
    - eg: "14053 10"
    - eg: "17013 -5"
        - 16504(Japan), 14053(USA), 2937(US), 52(U), 13(.), 50(S), 41187(united) 17013(United)
    - "values like -100 should result in a ban selection of the relevant token."
    - You can ban or overuse certain word Tokens.
        - https://platform.openai.com/tokenizer
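The "TokenID Bias" lines (e.g. "16504 10") are parsed into the `logit_bias` object the Chat API expects. A hypothetical standalone version of the parsing the script performs:

```javascript
// Parse "TokenID Bias" lines into a logit_bias object;
// lines without at least two numbers are skipped.
function parseLogitBias(lines) {
  const bias = {};
  for (const line of lines) {
    const nums = line.match(/-?\d+/g); // numbers, minus sign allowed
    if (nums !== null && nums.length >= 2) {
      bias[nums[0]] = Number(nums[1]);
    }
  }
  return bias;
}

const bias = parseLogitBias(["16504 10", "17013 -5", "no numbers here"]);
```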
- High temperatures (">1") and high biases (">10") may induce 500 errors and timeouts.
    - Determine by trial and error in the light of your work what values are appropriate.

APPENDIX
- The SYSTEM Role setting is used to set the role as the text author.
    - e.g. "End with '<<<'"
    - e.g. "Start with '>>>'. End with '<<<'."
    - e.g. "You are a creative Questetra employee."
    - e.g. "You are a helpful assistant."
        - https://platform.openai.com/docs/guides/chat/introduction
- The number of responses that can be set is up to 128.
- Stop words can be set up to four (invalid from the fifth line onward)
    - If `\t` or other such characters are set, they will be escaped.
    - To set a newline code (`\n`), set `- - -`. (experimental)
        - The response will always be a single line.
        - It always results in a 500 error. (as of 202303)
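The `- - -` placeholder handling for the `stop` array can be sketched as a hypothetical helper mirroring the script:

```javascript
// Map stop-word config lines to the API "stop" array; the
// placeholder "- - -" stands in for a newline (experimental).
function buildStops(lines) {
  return lines.map((s) => (s === "- - -" ? "\n" : s));
}

const stops = buildStops([".", "- - -", "END"]);
```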
- Setting a large allowance for generated response tokens may exceed the system limit.
    - Generation will be aborted if the allowable amount is reached.
        - `"finish_reason":"length",`
        - If a large number is set for "Number of Responses," the response is more likely to be interrupted.
    - Model usage limit: 4096 (total number of tokens)
        - Set smaller than 4096 minus the expected length of the PROMPT tokens.
    - For English, one word or symbol often counts as one token.
        - For average English, it is about 1 token for 4 characters.
        - In Japanese, a single character may be divided into multiple tokens.
        - In the case of average Japanese, a single character is about one token.
    - You can check the approximate number of word tokens at tokenizer.
        - https://platform.openai.com/tokenizer
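The sizing rules above can be turned into rough helpers. These are heuristics only (roughly 4 characters per token for average English, not a real tokenizer), and the 4096-token model limit is shared between PROMPT and COMPLETION:

```javascript
// Heuristic token estimate: ~4 characters per token for English.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Rough max_tokens budget: model limit minus the estimated PROMPT size.
function completionBudget(promptText, modelLimit = 4096) {
  return Math.max(0, modelLimit - estimateTokens(promptText));
}

const budget = completionBudget("Write a tagline for an ice cream shop");
```

Use the tokenizer page above when an exact count matters.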
- In settings that refer to numeric type data, be careful not to mix in digit separators.
    - The formatting function `#sformat` is useful. (Java String.format)
    - e.g. `#{#sformat("%1.1f", #q_numeric)}` (rounded to one decimal place)
    - "R2272: Output of Strings via EL"
        - https://questetra.zendesk.com/hc/en-us/articles/360024292872-R2272-Output-of-Strings-via-EL-syntax
- Headers for developers belonging to multiple organizations are not yet supported (as of 202303).
    - `OpenAI-Organization`.
- Note that the workflow app ID is automatically assigned to the `user` parameter of the OpenAI API. (experimental)
    - `processInstance.getProcessModelInfoId()`

*/

Download

Warning: Freely modifiable JavaScript (ECMAScript) code. No warranty of any kind.
(Installing Addon Auto-Steps is available only on the Professional edition.)
