Predictions

This guide covers everything you need to know about making predictions with the ViewAI Python SDK.

Overview

The ViewAI SDK provides two types of predictions:

  • Single-point predictions: Real-time predictions for individual data points

  • Batch predictions: Asynchronous predictions for large datasets

Quick Start

from viewai_client import ViewAIClient

# Initialize client
client = ViewAIClient(api_key="your-api-key")

# Single prediction
result = client.execute_single_point_prediction(
    data={"age": 35, "income": 50000},
    model_id="model-123"
)

# Batch prediction
job = client.execute_batch_prediction(
    data=[{"age": 35}, {"age": 42}],
    model_id="model-123",
    wait_for_completion=True
)

Single-Point Predictions

Single-point predictions are ideal for real-time inference where you need immediate results for individual data points.

Basic Usage
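
A minimal end-to-end call, using the same execute_single_point_prediction signature shown in the Quick Start (the feature names are illustrative):

from viewai_client import ViewAIClient

client = ViewAIClient(api_key="your-api-key")

# Score one record; data maps feature names to values
result = client.execute_single_point_prediction(
    data={"age": 35, "income": 50000},
    model_id="model-123"
)

print(result)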

Using the Service Directly
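
If you need lower-level control, the SDK may expose the underlying prediction service on the client. The names below (prediction_service, predict) are assumptions for illustration, not confirmed SDK API:

# Hypothetical lower-level access; attribute and method names are assumptions
service = client.prediction_service

result = service.predict(
    data={"age": 35, "income": 50000},
    model_id="model-123"
)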

Input Data Format

Data must be provided as a dictionary with feature names as keys:
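
For example (the feature names are illustrative and should match the schema the model was trained on):

# Keys are feature names; values are the raw feature values
data = {
    "age": 35,
    "income": 50000,
    "is_customer": True,
}

result = client.execute_single_point_prediction(data=data, model_id="model-123")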

Validating Input Data
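
A simple pre-flight check in plain Python, with no dependency on SDK validation helpers (REQUIRED_FEATURES is an illustrative schema):

REQUIRED_FEATURES = {"age", "income"}

def validate_record(record: dict) -> None:
    missing = REQUIRED_FEATURES - record.keys()
    if missing:
        raise ValueError(f"Missing required features: {sorted(missing)}")
    for name, value in record.items():
        if value is None:
            raise ValueError(f"Feature {name!r} is None")

validate_record({"age": 35, "income": 50000})  # passes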

Batch Predictions

Batch predictions are designed for processing large datasets asynchronously. They're ideal for scoring thousands or millions of records.

Basic Usage
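
The same call shown in the Quick Start; passing wait_for_completion=True blocks until the job finishes:

# Submit a batch job and wait for it to complete
job = client.execute_batch_prediction(
    data=[{"age": 35, "income": 50000}, {"age": 42, "income": 61000}],
    model_id="model-123",
    wait_for_completion=True
)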

Using Lists Instead of DataFrames
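
If your data lives in a pandas DataFrame, convert it to the list-of-dicts format the SDK accepts (column names are illustrative):

import pandas as pd

df = pd.DataFrame({"age": [35, 42], "income": [50000, 61000]})

# One dict per row, keyed by column name
records = df.to_dict(orient="records")

job = client.execute_batch_prediction(
    data=records,
    model_id="model-123",
    wait_for_completion=True
)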

Asynchronous Batch Processing
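
Passing wait_for_completion=False returns immediately, so your application can keep working while the job runs. What the returned job handle exposes isn't shown here, so job_id below is an assumed attribute:

# Submit without blocking; the call returns a job handle immediately
job = client.execute_batch_prediction(
    data=records,
    model_id="model-123",
    wait_for_completion=False
)

print(job.job_id)  # hypothetical attribute for looking the job up later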

Monitoring Job Progress
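
A polling sketch; the refresh() method and status attribute on the job handle are hypothetical names, so substitute whatever your SDK version actually exposes:

import time

while True:
    job.refresh()                  # hypothetical: re-fetch job state
    print(f"status={job.status}")  # hypothetical attribute
    if job.status in ("completed", "failed"):
        break
    time.sleep(5)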

Retrieving Batch Results

Results are available in the ViewAI dashboard.
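
If your SDK version also exposes results programmatically, access might look like the sketch below; the results attribute is an assumption, not confirmed API:

# Hypothetical: per-record predictions on the completed job handle
for prediction in job.results:
    print(prediction)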

Interpreting Results

Understanding the Prediction Class

The Prediction class provides methods to extract prediction information:
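
The exact interface isn't reproduced here; the to_dict() method below is an assumption for illustration, while dictionary-style access (covered next) is part of the documented interface:

result = client.execute_single_point_prediction(
    data={"age": 35, "income": 50000},
    model_id="model-123"
)

print(result.to_dict())  # hypothetical: full prediction payload as a dict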

Dictionary-Style Access
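
Key names vary by model type, so the key below is illustrative; see the API Reference for the exact keys your model returns:

# Prediction objects support dict-style indexing
predicted = result["prediction"]  # illustrative key name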

Classification Results

Classification models return probabilities for each class:
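
A sketch of reading class probabilities; the "probabilities" key and class labels are illustrative:

# Assumed shape: a mapping of class label -> probability
probabilities = result["probabilities"]  # e.g. {"yes": 0.82, "no": 0.18}

# Pick the most likely class
best_class = max(probabilities, key=probabilities.get)
print(best_class, probabilities[best_class])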

Regression Results

Regression models return predicted values:
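
A sketch of reading a regression output; the key name is illustrative:

predicted_value = result["prediction"]  # illustrative key name
print(f"Predicted value: {predicted_value:.2f}")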

Error Handling

Common Prediction Errors

Handle prediction errors gracefully:
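
The SDK's exception hierarchy isn't listed here, so this sketch catches Exception and notes where to narrow it:

try:
    result = client.execute_single_point_prediction(
        data={"age": 35, "income": 50000},
        model_id="model-123"
    )
except Exception as exc:  # narrow to the SDK's actual exception types
    print(f"Prediction failed: {exc}")
    result = None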

Handling Invalid Model IDs
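
A bad model ID typically surfaces as an API error at call time; catch it and fail with a clear message (the exact exception type to catch depends on the SDK):

try:
    result = client.execute_single_point_prediction(
        data={"age": 35, "income": 50000},
        model_id="nonexistent-model"
    )
except Exception as exc:  # narrow to the SDK's not-found error if it has one
    raise RuntimeError("Check that model_id exists in your ViewAI project") from exc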

Handling Missing Features
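
Catching missing features before the call gives clearer errors than a failed API request; the required-feature set is illustrative:

record = {"age": 35}  # "income" is missing

missing = {"age", "income"} - record.keys()
if missing:
    raise ValueError(f"Cannot predict; missing features: {sorted(missing)}")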

Batch Prediction Error Handling
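
A defensive check on a finished job; the status and error attributes are assumed names:

job = client.execute_batch_prediction(
    data=records,
    model_id="model-123",
    wait_for_completion=True
)

if job.status == "failed":                   # hypothetical attribute
    print(f"Batch job failed: {job.error}")  # hypothetical attribute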

Performance Considerations

When to Use Single vs. Batch Predictions

Use single-point predictions for:

  • Real-time inference (< 100ms latency required)

  • Interactive applications

  • Individual record scoring

  • Low-volume prediction workloads

Use batch predictions for:

  • Large datasets (>1000 records)

  • Periodic scoring jobs

  • Offline analysis

  • High-volume workloads where latency isn't critical

Optimizing Single Predictions
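
One general win is to create the client once and reuse it across requests instead of constructing it per prediction, since client setup typically involves authentication overhead:

from viewai_client import ViewAIClient

# Create once at startup and reuse for every request
client = ViewAIClient(api_key="your-api-key")

def score(record: dict):
    return client.execute_single_point_prediction(
        data=record,
        model_id="model-123"
    )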

Optimizing Batch Predictions
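
For very large datasets, submitting chunks as separate jobs keeps each job a manageable size; the chunk size is illustrative and worth tuning, and all_records stands in for your full list of dicts:

CHUNK_SIZE = 10_000  # illustrative; tune for your workload

def chunked(records, size):
    for i in range(0, len(records), size):
        yield records[i:i + size]

jobs = [
    client.execute_batch_prediction(
        data=chunk,
        model_id="model-123",
        wait_for_completion=False
    )
    for chunk in chunked(all_records, CHUNK_SIZE)
]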

Caching Predictions

For repeated predictions on the same data:
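
A plain-Python cache sketch; this is only safe while the underlying model version stays fixed. Records are serialized to canonical JSON because dicts aren't hashable:

import json
from functools import lru_cache

@lru_cache(maxsize=1024)
def cached_prediction(record_json: str, model_id: str):
    record = json.loads(record_json)
    return client.execute_single_point_prediction(data=record, model_id=model_id)

key = json.dumps({"age": 35, "income": 50000}, sort_keys=True)
result = cached_prediction(key, "model-123")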

Best Practices

1. Set Default Model ID

Configure a default model ID to simplify code:
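
Whether the client accepts a default model ID directly isn't shown here, so this sketch uses a module-level constant and a thin wrapper:

DEFAULT_MODEL_ID = "model-123"

def predict(record: dict, model_id: str = DEFAULT_MODEL_ID):
    return client.execute_single_point_prediction(data=record, model_id=model_id)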

2. Validate Input Data

Always validate data before making predictions:
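
A compact guard in plain Python (see Validating Input Data above for a fuller version):

def safe_predict(record: dict):
    if not record:
        raise ValueError("Empty record")
    if any(value is None for value in record.values()):
        raise ValueError("Record contains None values")
    return client.execute_single_point_prediction(data=record, model_id="model-123")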

3. Handle None Results

Always check for None results:
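
Whether a failed call returns None or raises depends on your error handling, so a defensive check is cheap insurance:

result = client.execute_single_point_prediction(
    data={"age": 35, "income": 50000},
    model_id="model-123"
)

if result is None:
    # Fall back to a default rather than crashing downstream code
    print("No prediction returned; using fallback")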

4. Use Appropriate Data Types

Ensure data types match model expectations:
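
Feature names and types here are illustrative; cast raw strings to the numeric types the model was trained on:

raw = {"age": "35", "income": "50000.0"}

# Cast to the expected numeric types before predicting
record = {"age": int(raw["age"]), "income": float(raw["income"])}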

5. Monitor Prediction Latency

Track prediction performance:
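
A simple timing wrapper using only the standard library:

import time

start = time.perf_counter()
result = client.execute_single_point_prediction(
    data={"age": 35, "income": 50000},
    model_id="model-123"
)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"Prediction latency: {elapsed_ms:.1f} ms")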

6. Use Batch for Bulk Operations

For multiple predictions, prefer batch operations:
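
One batch job instead of a loop of single-point calls:

records = [{"age": age} for age in range(18, 80)]

job = client.execute_batch_prediction(
    data=records,
    model_id="model-123",
    wait_for_completion=True
)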

7. Implement Retry Logic

Add retry logic for transient failures:
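
A retry sketch with exponential backoff; catching Exception stands in for the SDK's transient error types:

import time

def predict_with_retry(record: dict, attempts: int = 3):
    for attempt in range(attempts):
        try:
            return client.execute_single_point_prediction(
                data=record,
                model_id="model-123"
            )
        except Exception:  # narrow to the SDK's transient error types
            if attempt == attempts - 1:
                raise
            time.sleep(2 ** attempt)  # backoff: 1s, 2s, 4s, ...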

See Also

  • Training Models - Learn how to train models

  • Batch Processing - Advanced batch prediction techniques

  • Error Handling - Comprehensive error handling guide

  • API Reference - Detailed API documentation
