UnifyAI Model Comparison
The UnifyAI Model Comparison Flow automatically compares the outputs of different AI models in response to the same prompt. It lets you select multiple models, send the prompt to each, and then record and log the results for analysis.
This flow is particularly useful in applications such as model performance benchmarking, content generation, and AI model evaluation.
You can find this template in the Services Catalog under these categories:
AI, Contextual Basics, Enrichment
What's Included
1 Flow
1 Object Type
1 Connection
What You'll Need
Access to the UnifyAI API
API Key for the UnifyAI service
Ideas for Using the UnifyAI Model Comparison Flow
AI Model Benchmarking
Use this flow to compare the performance of various AI models on the same prompt. This can help in determining which model provides the most accurate or useful output for a specific application.
Content Generation Optimization
Test different AI models for content generation tasks. By comparing the outputs, you can choose the best model for generating marketing content, articles, or creative writing.
Research and Development
For teams involved in AI development, this flow can be used to compare new models with established ones, providing insights into how improvements in models affect output quality.
Flow Overview
Flow Start
The flow begins by injecting a test prompt and selecting up to three AI models for comparison.
Prepare Models
The flow then prepares the models and their respective prompts, splitting the request into individual payloads for each model.
Send Prompts to API
Each model's prompt is sent to the UnifyAI API. The API processes the prompts and returns a response for each model.
Process API Responses
The responses from the API are processed and formatted to be saved as records in the system for further analysis.
Record Creation
The formatted responses are stored as new records in the system, including details about the prompt, the AI model used, and the model's response.
Error Handling
Any errors encountered during the flow are captured and logged for troubleshooting.
Flow End
The flow concludes once the records have been successfully created or any errors have been logged.
UnifyAI Model Comparison Flow Details
Inbound Send to Agent Events
Nodes: contextual-start, link out
Purpose: The flow begins by receiving a start signal, typically initiated by an external event or agent.
In-Editor Testing
Nodes: Test Prompt, Prepare Models, Split Models
Purpose: Allows for testing different prompts and models directly within the editor. The prompt and selected models are prepared and split into individual payloads for further processing.
Code Example: Prepare Models Function
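The template's actual node contents are not shown here, so the following is a minimal standalone sketch of what a Prepare Models function node might contain. It assumes the node receives a prompt string and a list of model identifiers (the model names below are purely illustrative) and emits one payload object per model:

```javascript
// Sketch of a "Prepare Models" function node (illustrative, not the
// template's exact code). Builds one { prompt, model } payload per
// selected model so each can be sent to the API separately.
function prepareModels(prompt, models) {
  return models.map((model) => ({ prompt, model }));
}

// Example usage with hypothetical model identifiers:
const payloads = prepareModels(
  "Summarize the benefits of solar energy.",
  ["model-a", "model-b", "model-c"]
);
console.log(payloads.length); // 3
```

A downstream split node can then fan these payloads out so each model's request is processed independently.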
Explanation:
This function prepares the payloads for each selected model by creating an array of objects. Each object contains a prompt and the corresponding model to be used.
The prepared payloads are then returned for further processing in the flow.
Send Prompt and Receive Responses
Nodes: Prepare Prompt, Prompt UnifyAI, UnifyAI Model Response, link out
Purpose: The prepared prompts are sent to the UnifyAI API. Each model's response is logged and passed on for further processing.
Code Example: Prepare Prompt Function
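The template's exact payload structure is not shown, so this is a hedged sketch of a Prepare Prompt function node. It assumes the UnifyAI endpoint accepts an OpenAI-compatible chat format, where the prompt is wrapped in a `messages` array alongside the chosen model:

```javascript
// Sketch of a "Prepare Prompt" function node (illustrative). Assumes an
// OpenAI-compatible chat-completions request body: the model identifier
// plus a messages array carrying the user's prompt.
function preparePrompt(payload) {
  return {
    model: payload.model,
    messages: [{ role: "user", content: payload.prompt }],
  };
}

// Example usage with a hypothetical payload from the Prepare Models step:
const body = preparePrompt({
  prompt: "What is renewable energy?",
  model: "model-a",
});
// `body` would be serialized as the JSON request body for the API call.
```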
Explanation:
This function constructs the payload that will be sent to the UnifyAI API. It specifies the model to be used and formats the prompt in the required structure.
The payload is then passed to the next node in the flow, where it will be sent to the API.
Format Responses & Create Records
Nodes: Prepare Record Data, Create AI Response Record, Create AI Response Record Log, link out
Purpose: The responses from each model are formatted into a consistent structure and saved as records in the system, logging key details for each model's performance.
Code Example: Prepare Record Data Function
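Since the template's code is not reproduced here, the following is an illustrative sketch of a Prepare Record Data function. It assumes an OpenAI-style response shape (`choices` and `usage` fields) and uses token count as a stand-in for the computational cost; the actual field names in the template may differ:

```javascript
// Sketch of a "Prepare Record Data" function node (field names are
// assumptions). Flattens an API response into a record-friendly object
// holding the prompt, model, response text, and a cost measure.
function prepareRecordData(prompt, model, apiResponse) {
  return {
    prompt,
    model,
    response: apiResponse.choices?.[0]?.message?.content ?? "",
    // Hypothetical cost field: total tokens consumed by the request.
    cost: apiResponse.usage?.total_tokens ?? 0,
  };
}

// Example usage with a mocked API response:
const record = prepareRecordData("Explain photosynthesis.", "model-a", {
  choices: [{ message: { content: "Plants convert light into energy..." } }],
  usage: { total_tokens: 42 },
});
```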
Explanation:
This function formats the API response into a structured object that can be easily stored as a record. It includes the original prompt, the AI model used, the response generated by the model, and the computational cost.
The formatted object is then passed on to be saved as a record in the system.
Error Handling
Nodes: catch, Error Catch Log, contextual-error
Purpose: Catches any errors that occur during the flow and logs them for review, ensuring that issues can be identified and resolved.
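As a rough illustration of the Error Catch Log step (the template's actual handler is not shown), a catch node in Node-RED-style flows typically attaches an `error` object to the message; a function node like the following could format it into a log entry. All field names here are assumptions:

```javascript
// Sketch of an error-logging function (illustrative). Assumes the catch
// node sets msg.error with a message and the name of the failing node.
function formatErrorLog(msg) {
  return {
    timestamp: new Date().toISOString(),
    source: msg.error?.source?.name ?? "unknown",
    message: msg.error?.message ?? "unspecified error",
  };
}

// Example usage with a mocked caught error:
const entry = formatErrorLog({
  error: { message: "API timeout", source: { name: "Prompt UnifyAI" } },
});
```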
Flow End
Nodes: contextual-end, link in
Purpose: The flow completes its process, either after successfully creating records or after logging any errors that occurred.
Summary of Flow
Flow Start: Initiate the flow with a test prompt and model selection.
Data Preparation: Split and prepare the models and prompts for API interaction.
API Interaction: Send each model's prompt to UnifyAI and log the responses.
Record Creation: Format and store the model responses as records for analysis.
Error Handling: Capture and log any errors that occur during the process.
Flow End: Conclude the flow after records are created or errors are logged.