Overview

Apologist Fusion is an API that allows developers to integrate our Christian apologetics Agents with their own applications. Apologist Fusion is fully compatible with the OpenAI chat completions API specification, which means our API is a drop-in replacement for any application currently using that popular chat completions API signature and should work out of the box with any of the many OpenAI SDKs in the language of your choice.

Minimal Request Example #

Since your Agent is configured on the platform, API calls to get completions can be extremely minimal. All that’s required is the prompt for which you want a response:

curl \
--header 'x-api-key: apg_xxxxxxxxxxxxxxxxxxxxxxxxxxxx' \
--header 'Content-Type: application/json' \
--data '{
    "prompt": "How can a good God allow so much evil in the world?"
}' \
--url https://my.gospel.bot/api/v1/chat/completions

Please note: you must replace the x-api-key value with your API key and my.gospel.bot with your Agent’s domain.

Full Request Example #

The API also supports overriding the default Agent configuration options, as well as several other runtime options. Here’s an example showing all supported parameters that have an effect on the output:

curl \
--header 'x-api-key: apg_xxxxxxxxxxxxxxxxxxxxxxxxxxxx' \
--header 'Content-Type: application/json' \
--data '{
    "model": "openai/gpt/4o",
    "stream": false,
    "messages": [
        {
            "role": "system",
            "content": "This a system prompt override."
        },
        {
            "role": "user",
            "content": "This is a previous prompt."
        },
        {
            "role": "assistant",
            "content": "This is a previous completion."
        },
        {
            "role": "user",
            "content": "This is the current prompt."
        }
    ],
    "response_format": { 
        "type": "json" 
    },
    "metadata": {
        "anonymous": true,
        "conversation": null,
        "language": "en",
        "session": null,
        "translation": "esv"
    },
    "frequency_penalty": 0.25,
    "presence_penalty": -0.25,
    "max_completion_tokens": 1024,
    "reasoning_effort": "high",
    "temperature": 0.5,
    "top_p": 0.9,
    "user": null
}' \
--url https://my.gospel.bot/api/v1/chat/completions

Please note: you must replace the x-api-key value with your API key and my.gospel.bot with your Agent’s domain.

Authorization #

All requests to our chat completion endpoints must include an x-api-key header with a valid API key value. API keys may be provisioned against an Agent, and those keys will be specific to that Agent. Be sure to use the domain of your custom Agent when making an API call; API keys are validated against the custom Agent, which is identified by URL.

Contact us to get a custom Agent and API key to use.

Agent Selection #

All custom Agents are available at a unique domain. When making an API request, the domain of the endpoint will dictate which Agent is used to respond.

Prompt and Messages #

You must supply either a prompt string or a messages array.

The messages parameter is an array of message objects, each with a role and a content property. By default, an Agent has a system prompt that is automatically applied. Also by default, previous exchanges between the Agent and the user, as identified by the metadata.session, metadata.conversation, and user parameters, are prepended to the array of messages that is ultimately sent to the LLM. You may control how many past exchanges you wish the Agent to use by passing an integer for the metadata.max_memories parameter. You only need to supply a messages array if you wish to override this default behavior. An example request using these parameters follows the list below.

If you want to use the aforementioned defaults, or the response doesn’t require context from previous exchanges with the Agent, you may simply provide a prompt string.

If you wish to exclude the context of previous exchanges altogether, regardless of the values of metadata.session, metadata.conversation, and user, you may pass metadata.anonymous as true.

  • user: any string identifier for the user (100 characters max)
  • metadata.conversation: any string identifier for the conversation within a single session (100 characters max)
  • metadata.session: any string identifier for the user’s session (100 characters max)
  • metadata.anonymous: if set to true, no past exchange context is provided to the Agent (one-shot); equivalent to setting metadata.max_memories: 0
  • metadata.max_memories: the number of previous exchanges to provide to the Agent
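
For instance, here is a minimal sketch of a request that supplies a prompt along with user and session identifiers and limits the Agent to the three most recent exchanges; the identifier values and the prompt are illustrative placeholders:

curl \
--header 'x-api-key: apg_xxxxxxxxxxxxxxxxxxxxxxxxxxxx' \
--header 'Content-Type: application/json' \
--data '{
    "prompt": "What have we already discussed about the problem of evil?",
    "user": "user-1234",
    "metadata": {
        "conversation": "conversation-001",
        "session": "session-5678",
        "max_memories": 3
    }
}' \
--url https://my.gospel.bot/api/v1/chat/completions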

Streaming vs Non-Streaming #

The stream parameter controls whether the response should be streamed or delivered all at once. Chat interfaces typically benefit from streaming output, since the user gets more immediate feedback. However, some use cases lend themselves to non-streaming output as well.
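
For example, a streaming request only needs stream set to true. The sketch below assumes the streamed chunks follow the same server-sent-events format as OpenAI’s streaming responses; curl’s --no-buffer flag simply prints chunks as they arrive:

curl \
--no-buffer \
--header 'x-api-key: apg_xxxxxxxxxxxxxxxxxxxxxxxxxxxx' \
--header 'Content-Type: application/json' \
--data '{
    "prompt": "Summarize the moral argument for the existence of God.",
    "stream": true
}' \
--url https://my.gospel.bot/api/v1/chat/completions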

Response Format #

The response format may be specified using the response_format.type parameter. Valid values are as follows:

  • text: plain text output [default]
  • html: HTML formatted, converted from markdown where applicable
  • json: structured JSON response; includes the completion as well as token usage and timing stats. Only available when the stream option is false.
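
As a sketch, a non-streaming request for a structured JSON response might look like the following; the prompt is a placeholder:

curl \
--header 'x-api-key: apg_xxxxxxxxxxxxxxxxxxxxxxxxxxxx' \
--header 'Content-Type: application/json' \
--data '{
    "prompt": "Is there historical evidence for the resurrection of Jesus?",
    "stream": false,
    "response_format": {
        "type": "json"
    }
}' \
--url https://my.gospel.bot/api/v1/chat/completions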

Bible Translation #

The metadata.translation parameter indicates which Bible translation the Agent should prefer. The following Bible translations are supported at this time:

  • esv: English Standard Version [default]
  • niv: New International Version
  • bsb: Berean Study Bible
  • kjv: King James Version
  • net: New English Translation
  • nkjv: New King James Version
  • nlt: New Living Translation
  • csb: Christian Standard Bible
  • nasb: New American Standard Bible

Language #

Use the metadata.language parameter to control the output language. Language support varies by model and is specific to a given Agent. However, any Agent can be upgraded to utilize real-time translation, expanding its language support to 192 languages. Here is the full list of supported languages:

  • en: English [default]
  • ab: Abkhaz
  • ace: Acehnese
  • ach: Acholi
  • af: Afrikaans
  • sq: Albanian
  • alz: Alur
  • am: Amharic
  • ar: Arabic
  • hy: Armenian
  • as: Assamese
  • awa: Awadhi
  • ay: Aymara
  • az: Azerbaijani
  • ban: Balinese
  • bm: Bambara
  • ba: Bashkir
  • eu: Basque
  • btx: Batak Karo
  • bts: Batak Simalungun
  • bbc: Batak Toba
  • be: Belarusian
  • bem: Bemba
  • bn: Bengali
  • bew: Betawi
  • bho: Bhojpuri
  • bik: Bikol
  • bs: Bosnian
  • br: Breton
  • bg: Bulgarian
  • bua: Buryat
  • yue: Cantonese
  • ca: Catalan
  • ceb: Cebuano
  • ny: Chichewa (Nyanja)
  • zh: Chinese (Simplified)
  • zh-TW: Chinese (Traditional)
  • cv: Chuvash
  • co: Corsican
  • crh: Crimean Tatar
  • hr: Croatian
  • cs: Czech
  • da: Danish
  • din: Dinka
  • dv: Divehi
  • doi: Dogri
  • dov: Dombe
  • nl: Dutch
  • dz: Dzongkha
  • eo: Esperanto
  • et: Estonian
  • ee: Ewe
  • fj: Fijian
  • tl: Filipino (Tagalog)
  • fi: Finnish
  • fr: French
  • fr-CA: French (Canadian)
  • fy: Frisian
  • ff: Fulfulde
  • gaa: Ga
  • gl: Galician
  • lg: Ganda (Luganda)
  • ka: Georgian
  • de: German
  • el: Greek
  • gn: Guarani
  • gu: Gujarati
  • ht: Haitian Creole
  • cnh: Hakha Chin
  • ha: Hausa
  • haw: Hawaiian
  • he: Hebrew
  • hil: Hiligaynon
  • hi: Hindi
  • hmn: Hmong
  • hu: Hungarian
  • hrx: Hunsrik
  • is: Icelandic
  • ig: Igbo
  • ilo: Iloko
  • id: Indonesian
  • ga: Irish
  • it: Italian
  • ja: Japanese
  • jw: Javanese
  • kn: Kannada
  • pam: Kapampangan
  • kk: Kazakh
  • km: Khmer
  • cgg: Kiga
  • rw: Kinyarwanda
  • ktu: Kituba
  • gom: Konkani
  • ko: Korean
  • kri: Krio
  • ku: Kurdish (Kurmanji)
  • ckb: Kurdish (Sorani)
  • ky: Kyrgyz
  • lo: Lao
  • ltg: Latgalian
  • la: Latin
  • lv: Latvian
  • lij: Ligurian
  • li: Limburgan
  • ln: Lingala
  • lt: Lithuanian
  • lmo: Lombard
  • luo: Luo
  • lb: Luxembourgish
  • mk: Macedonian
  • mai: Maithili
  • mak: Makassar
  • mg: Malagasy
  • ms: Malay
  • ms-Arab: Malay (Jawi)
  • ml: Malayalam
  • mt: Maltese
  • mi: Maori
  • mr: Marathi
  • chm: Meadow Mari
  • mni-Mtei: Meiteilon (Manipuri)
  • min: Minang
  • lus: Mizo
  • mn: Mongolian
  • my: Myanmar (Burmese)
  • nr: Ndebele (South)
  • new: Nepalbhasa (Newari)
  • ne: Nepali
  • nso: Northern Sotho (Sepedi)
  • no: Norwegian
  • nus: Nuer
  • oc: Occitan
  • or: Odia (Oriya)
  • om: Oromo
  • pag: Pangasinan
  • pap: Papiamento
  • ps: Pashto
  • fa: Persian (Farsi)
  • pl: Polish
  • pt: Portuguese
  • pt-BR: Portuguese (Brazil)
  • pa: Punjabi
  • pa-Arab: Punjabi (Shahmukhi)
  • qu: Quechua
  • rom: Romani
  • ro: Romanian
  • rn: Rundi
  • ru: Russian
  • sm: Samoan
  • sg: Sango
  • sa: Sanskrit
  • gd: Scots Gaelic
  • sr: Serbian
  • st: Sesotho
  • crs: Seychellois Creole
  • shn: Shan
  • sn: Shona
  • scn: Sicilian
  • szl: Silesian
  • sd: Sindhi
  • si: Sinhala (Sinhalese)
  • sk: Slovak
  • sl: Slovenian
  • so: Somali
  • es: Spanish
  • su: Sundanese
  • sw: Swahili
  • ss: Swati
  • sv: Swedish
  • tg: Tajik
  • ta: Tamil
  • tt: Tatar
  • te: Telugu
  • tet: Tetum
  • th: Thai
  • ti: Tigrinya
  • ts: Tsonga
  • tn: Tswana
  • tr: Turkish
  • tk: Turkmen
  • ak: Twi (Akan)
  • uk: Ukrainian
  • ur: Urdu
  • ug: Uyghur
  • uz: Uzbek
  • vi: Vietnamese
  • cy: Welsh
  • xh: Xhosa
  • yi: Yiddish
  • yo: Yoruba
  • yua: Yucatec Maya
  • zu: Zulu
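
As an illustration, the sketch below combines the metadata.language option from this section with the metadata.translation option from the previous one to request a Spanish response that prefers the NIV; the prompt is a placeholder:

curl \
--header 'x-api-key: apg_xxxxxxxxxxxxxxxxxxxxxxxxxxxx' \
--header 'Content-Type: application/json' \
--data '{
    "prompt": "What does the Bible say about forgiveness?",
    "metadata": {
        "language": "es",
        "translation": "niv"
    }
}' \
--url https://my.gospel.bot/api/v1/chat/completions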

Model #

Agent supports dozens of models across many providers using a standardized naming scheme: creator/family/variant. Pass the model name via the model parameter.

Limited Models #

Low-latency, inexpensive models with very limited reasoning abilities.

  • google/gemma/9b
  • mistral/mixtral/8x7b
  • openai/gpt/4o-mini
  • anthropic/claude3.5/haiku
  • meta/llama3.3/70b-specdec
  • meta/llama3.3/70b-versatile
  • alibaba/qwen2.5/32b
  • mistral/small/24b

Standard Models #

Economical models that balance lower cost with better responses.

  • apologist/aquinas/v4
  • mistral/mixtral/8x22b
  • alibaba/qwen2.5/72b
  • microsoft/wizardlm/8x22b
  • deepseek/deepseek/v3
  • openai/gpt/o1-mini
  • openai/gpt/o3-mini

Premium Models #

Higher quality output at a premium price point.

  • meta/llama3.1/405b
  • 01ai/yi/large
  • xai/grok/2
  • openai/gpt/4o
  • anthropic/claude3.5/sonnet

Reasoning Models #

High-latency but extremely high quality output.

  • anthropic/claude3.7/sonnet
  • deepseek/deepseek/r1

Reserved Models #

Only available upon request at special pricing.

  • openai/gpt/o1
  • anthropic/claude3/opus
  • openai/gpt/4.5

Standard LLM Parameters #

Agent supports standard LLM parameters that modify the behavior of the model:

  • frequency_penalty
  • presence_penalty
  • max_completion_tokens
  • reasoning_effort (for reasoning models only)
  • temperature
  • top_p

See OpenAI’s docs for more information about these parameters.
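
For example, here is a sketch of a request that pairs one of the reasoning models listed above with the reasoning_effort parameter, assuming the prompt shorthand can be combined with these standard parameters; the prompt is a placeholder:

curl \
--header 'x-api-key: apg_xxxxxxxxxxxxxxxxxxxxxxxxxxxx' \
--header 'Content-Type: application/json' \
--data '{
    "model": "deepseek/deepseek/r1",
    "prompt": "Present a careful response to the logical problem of evil.",
    "reasoning_effort": "high",
    "max_completion_tokens": 2048
}' \
--url https://my.gospel.bot/api/v1/chat/completions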

Ineffectual Passthrough Parameters #

There are some parameters that do not directly affect the Agent’s response, but we pass them along to the target LLM so that the API remains fully compatible with OpenAI’s chat completions endpoint:

  • audio
  • logit_bias
  • logprobs
  • modalities
  • n
  • parallel_tool_calls
  • prediction
  • seed
  • service_tier
  • stop
  • store
  • stream_options
  • tools
  • tool_choice

Agent does not currently support multi-modal input or output, returns only a single response at a time, and does not support the use of external tools.