Services
This document describes all service classes in the ViewAI Python SDK. Each service follows a service-oriented architecture pattern and encapsulates a specific domain of functionality.
Overview
The ViewAI SDK provides specialized service classes for different operations:
PredictionService: Execute predictions and manage inference jobs
ModelTrainingService: Train and manage machine learning models
WorkspaceManager: Manage workspace resources
ProjectManager: Manage project resources
HealthChecker: Monitor API health and connectivity
ModelRegistry: Search, compare, and manage deployed models
PredictionService
Service for model prediction and inference operations.
Initialization
PredictionService(http_client, api_client)
Parameters:
http_client: Configured HTTP client for API requests
api_client: Legacy API client for backward compatibility
Note: Typically accessed via client.prediction_service property.
execute_prediction(data, model_id, **options)
Execute a prediction (automatically routes to single or batch).
Parameters:
data(dict or list): Data to predict on
dict: Single data point for immediate prediction
list: Multiple data points for batch prediction
model_id(str): Model ID to use for prediction
**options: Additional prediction options (reserved for future use)
Returns:
Prediction object for single predictions
BatchPredictionJob for batch predictions
None if prediction fails
Raises:
ValueError: If data format is not supported or model_id is missing
Examples:
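A minimal sketch of the automatic routing, assuming an illustrative import path, API key, and model ID; a dict input returns a result immediately while a list input creates a batch job:
```python
from viewai import ViewAIClient  # import path assumed for illustration

client = ViewAIClient(api_key="YOUR_API_KEY")  # constructor arguments may differ
model_id = "model_123"  # placeholder model ID

# A single dict routes to an immediate, synchronous prediction
single = client.prediction_service.execute_prediction(
    {"age": 42, "income": 55000}, model_id=model_id
)

# A list of dicts routes to an asynchronous batch prediction job
batch_job = client.prediction_service.execute_prediction(
    [{"age": 42, "income": 55000}, {"age": 31, "income": 48000}],
    model_id=model_id,
)
```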
execute_single_point_prediction(data, model_id)
Execute prediction for a single data point.
Parameters:
data(dict): Single data point as dictionary of feature values
model_id(str): Model ID to use for prediction
Returns: Prediction object or None
Raises:
ValueError: If data is empty or model_id not provided
Example:
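A single-point sketch, reusing the client and placeholder feature names from the example above:
```python
prediction = client.prediction_service.execute_single_point_prediction(
    data={"age": 42, "income": 55000},
    model_id="model_123",
)
if prediction is None:
    print("Prediction failed; see the logged error")
```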
Note:
Data is validated using DataPoint class
Results returned immediately (synchronous)
Failed predictions return None with error logged
execute_batch_prediction(data, model_id, wait_for_completion=True)
Execute prediction for multiple data points (batch).
Parameters:
data(list or DataFrame): List of dictionaries or DataFrame with multiple data points
model_id(str): Model ID to use for predictions
wait_for_completion(bool): If True, polls until job completes. If False, returns job immediately.
Returns: BatchPredictionJob object or None
Raises:
ValueError: If data is empty or model_id not provided
Example:
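A batch sketch using a pandas DataFrame with placeholder columns; wait_for_completion=False returns the job without polling:
```python
import pandas as pd

df = pd.DataFrame([
    {"age": 42, "income": 55000},
    {"age": 31, "income": 48000},
])
job = client.prediction_service.execute_batch_prediction(
    data=df,
    model_id="model_123",
    wait_for_completion=False,  # return the job immediately instead of polling
)
```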
Note:
Data is uploaded as CSV to cloud storage
Job execution is asynchronous
Results available in dashboard after completion
retrieve_batch_job_status(job_id)
Retrieve the status of a batch prediction job.
Parameters:
job_id(str): Unique identifier of the batch prediction job
Returns: Dictionary with job status information or None
Example:
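Assuming job is a BatchPredictionJob returned by an earlier call:
```python
status = client.prediction_service.retrieve_batch_job_status(job.job_id)
if status is not None:
    print(status)
```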
monitor_batch_job_until_complete(job, poll_interval=10, max_wait_time=None)
Monitor a batch prediction job until completion.
Parameters:
job(BatchPredictionJob): BatchPredictionJob object to monitor
poll_interval(int): Seconds between status checks (default: 10)
max_wait_time(int, optional): Maximum seconds to wait. If None, uses MAX_POLLING_ITERATIONS * poll_interval
Returns: Updated BatchPredictionJob with final status
Example:
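Polling a previously created job, with illustrative interval and timeout values:
```python
job = client.prediction_service.monitor_batch_job_until_complete(
    job,
    poll_interval=15,    # check status every 15 seconds
    max_wait_time=1800,  # stop waiting after 30 minutes
)
print(job.status)
```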
Note:
Displays progress bar during monitoring
Updates job.status in place
Logs timeout if max wait time exceeded
validate_prediction_data(data, model_id)
Validate data against model schema before prediction.
Parameters:
data(dict): Data point to validate
model_id(str): Model ID to validate against
Returns: True if data is valid, False otherwise
Example:
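Validating a data point before predicting, with the same placeholder features as above:
```python
data = {"age": 42, "income": 55000}
if client.prediction_service.validate_prediction_data(data, model_id="model_123"):
    prediction = client.prediction_service.execute_single_point_prediction(data, "model_123")
else:
    print("Data does not match the model schema")
```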
ModelTrainingService
Service for model training operations.
Initialization
Note: Typically accessed via client.training_service property.
initiate_training_job(...)
Initiate a new model training job.
Parameters:
dataset(DataFrame): DataFrame containing training data with target column
target_column(str): Name of the target/label column in dataset
workspace(Workspace or str): Workspace object or workspace ID string
project(Project or str, optional): Project object or project ID string
model_name(str): Name for the trained model (default: "Default Model")
description(str): Description of the model/training job (default: "No description provided")
wait_for_completion(bool): If True, monitors job until completion (default: True)
Returns: TrainingJob object or None
Raises:
ValueError: If required parameters are missing or invalid
Example:
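A training sketch, assuming a local CSV file, a placeholder workspace ID, and an illustrative target column name:
```python
import pandas as pd

dataset = pd.read_csv("training_data.csv")  # must include the target column
job = client.training_service.initiate_training_job(
    dataset=dataset,
    target_column="churned",
    workspace="ws_123",
    model_name="Churn Model",
    description="Baseline churn model",
    wait_for_completion=True,
)
```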
Note:
Training is asynchronous and may take several minutes
Dataset is uploaded as CSV to cloud storage
Progress displayed via progress bar if waiting
retrieve_training_job_status(job_id)
Retrieve the current status of a training job.
Parameters:
job_id(str): Unique identifier of the training job (dashboard_id)
Returns: Dictionary with job status information or None
Example:
Note: Status values include: pending, training, training_completed, failed
monitor_training_job_until_complete(job, poll_interval=10, max_wait_time=None)
Monitor a training job until completion or timeout.
Parameters:
job(TrainingJob): TrainingJob object to monitor
poll_interval(int): Seconds between status checks (default: 10)
max_wait_time(int, optional): Maximum seconds to wait
Returns: Updated TrainingJob with final status
Example:
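Monitoring a previously started training job with illustrative timing values:
```python
job = client.training_service.monitor_training_job_until_complete(
    job,
    poll_interval=30,
    max_wait_time=3600,  # stop waiting after one hour
)
print(job.status, job.dashboard_url)
```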
Note:
Displays progress bar during monitoring
Updates job.status in place
Prints completion message with dashboard link
retrieve_training_job_results(job_id)
Retrieve results from a completed training job.
Parameters:
job_id(str): Unique identifier of the completed training job
Returns: Dictionary with training results and metrics or None
Example:
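Fetching results once the job has completed, assuming job from the training sketch above:
```python
results = client.training_service.retrieve_training_job_results(job.job_id)
if results is not None:
    print(results)  # metrics structure depends on the model type
```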
Note:
Only works for completed training jobs
Results structure depends on model type
WorkspaceManager
Service for workspace operations.
Initialization
Note: Typically accessed via client.workspace_manager property.
retrieve_default_workspace()
Retrieve the default workspace for the authenticated user.
Returns: Workspace object or None
Example:
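A minimal sketch using the client from earlier examples:
```python
workspace = client.workspace_manager.retrieve_default_workspace()
if workspace is not None:
    print(workspace)
```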
Note: Returns the first workspace from the user's accessible workspaces list.
retrieve_workspace_by_name(workspace_name)
Retrieve a workspace by its unique name.
Parameters:
workspace_name(str): Name of the workspace to retrieve
Returns: Workspace object or None
Raises:
ValueError: If workspace_name is empty or None
Example:
Note: Workspace names are case-sensitive.
retrieve_workspace_by_id(workspace_id)
Retrieve a workspace by its unique identifier.
Parameters:
workspace_id(str): Unique identifier of the workspace
Returns: Workspace object or None
Raises:
ValueError: If workspace_id is empty or None
Example:
list_accessible_workspaces()
List all workspaces accessible to the authenticated user.
Returns: List[Workspace]
Example:
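Listing workspaces with the same client:
```python
workspaces = client.workspace_manager.list_accessible_workspaces()
print(f"{len(workspaces)} workspace(s) accessible")
```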
workspace_exists(workspace_name)
Check if a workspace with the given name exists.
Parameters:
workspace_name(str): Name of the workspace to check
Returns: True if workspace exists, False otherwise
Example:
count_accessible_workspaces()
Count the number of workspaces accessible to the user.
Returns: Number of accessible workspaces (int)
Example:
ProjectManager
Service for project operations.
Initialization
Note: Typically accessed via client.project_manager property.
retrieve_project_by_name(project_name)
Retrieve a project by its unique name.
Parameters:
project_name(str): Name of the project to retrieve
Returns: Project object or None
Raises:
ValueError: If project_name is empty or None
Example:
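A lookup sketch with a placeholder project name; names are case-sensitive:
```python
project = client.project_manager.retrieve_project_by_name("Customer Analytics")
if project is None:
    print("Project not found")
```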
Note: Project names are case-sensitive.
retrieve_project_by_id(project_id)
Retrieve a project by its unique identifier.
Parameters:
project_id(str): Unique identifier of the project
Returns: Project object or None
Example:
list_accessible_projects(workspace_id=None)
List all projects accessible to the authenticated user.
Parameters:
workspace_id(str, optional): Workspace ID to filter projects
Returns: List[Project]
Examples:
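Listing projects with and without an illustrative workspace filter:
```python
# All projects visible to the user
all_projects = client.project_manager.list_accessible_projects()

# Projects within a specific workspace (placeholder ID)
workspace_projects = client.project_manager.list_accessible_projects(workspace_id="ws_123")
```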
project_exists(project_name)
Check if a project with the given name exists.
Parameters:
project_name(str): Name of the project to check
Returns: True if project exists, False otherwise
Example:
count_accessible_projects(workspace_id=None)
Count the number of projects accessible to the user.
Parameters:
workspace_id(str, optional): Workspace ID to filter the count
Returns: Number of accessible projects (int)
Example:
search_projects_by_name_pattern(pattern)
Search for projects matching a name pattern.
Parameters:
pattern(str): Search pattern (case-insensitive substring match)
Returns: List[Project] of matching projects
Example:
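A pattern search sketch with a placeholder substring:
```python
matches = client.project_manager.search_projects_by_name_pattern("analytics")
print(f"Found {len(matches)} matching project(s)")
```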
HealthChecker
Service for API health monitoring and diagnostics.
Initialization
Note: Typically accessed via client.health property.
check_connection()
Check basic connection to API.
Returns: HealthCheckResult with connection status
Example:
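A connectivity check using the client from earlier examples:
```python
result = client.health.check_connection()
print(result)
```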
check_authentication()
Check API authentication.
Returns: HealthCheckResult with authentication status
Example:
check_api_endpoints()
Check multiple API endpoints.
Returns: Dictionary mapping endpoint names to HealthCheckResult objects
Example:
run_diagnostics()
Run comprehensive diagnostics.
Returns: Dictionary with all diagnostic results
Example:
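Running the full diagnostic suite and printing each result:
```python
diagnostics = client.health.run_diagnostics()
for name, result in diagnostics.items():
    print(name, result)
```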
measure_latency(num_requests=5)
Measure API latency.
Parameters:
num_requests(int): Number of requests to make (default: 5)
Returns: Dictionary with latency statistics (min, max, avg, median)
Example:
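Measuring latency over a custom number of requests:
```python
stats = client.health.measure_latency(num_requests=10)
print(stats)  # expected keys: min, max, avg, median
```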
test_network_reliability(num_attempts=10)
Test network reliability with multiple requests.
Parameters:
num_attempts(int): Number of test requests (default: 10)
Returns: Dictionary with reliability statistics
Example:
ModelRegistry
Service for listing, searching, and managing deployed models.
Initialization
Note: Typically accessed via client.registry property.
list_models(workspace_id=None, project_id=None, limit=None)
List deployed models.
Parameters:
workspace_id(str, optional): Filter by workspace ID
project_id(str, optional): Filter by project ID
limit(int, optional): Maximum number of models to return
Returns: List[Dict] of model dictionaries
Example:
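Listing models filtered by a placeholder workspace ID:
```python
models = client.registry.list_models(workspace_id="ws_123", limit=20)
for model in models:
    print(model)
```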
get_model(model_id)
Get model by ID.
Parameters:
model_id(str): Model/dashboard ID
Returns: Model dictionary or None if not found
Example:
search_models(name=None, tags=None, created_after=None, created_before=None)
Search models by criteria.
Parameters:
name(str, optional): Search by model name (partial match)
tags(List[str], optional): Filter by tags
created_after(datetime, optional): Filter by creation date (after)
created_before(datetime, optional): Filter by creation date (before)
Returns: List[Dict] of matching models
Example:
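A search sketch combining a partial name match with a date filter; the name is a placeholder:
```python
from datetime import datetime, timedelta

recent_models = client.registry.search_models(
    name="fraud",                                       # partial name match
    created_after=datetime.now() - timedelta(days=30),  # created in the last 30 days
)
```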
get_model_metadata(model_id)
Get model metadata.
Parameters:
model_id(str): Model/dashboard ID
Returns: Dictionary with model metadata
Example:
compare_models(model_ids)
Compare multiple models.
Parameters:
model_ids(List[str]): List of model IDs to compare
Returns: DataFrame comparing models
Example:
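Comparing two placeholder model IDs; the result is a DataFrame:
```python
comparison = client.registry.compare_models(["model_123", "model_456"])
print(comparison)
```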
get_model_stats(workspace_id=None)
Get model statistics.
Parameters:
workspace_id(str, optional): Filter by workspace ID
Returns: Dictionary with model statistics
Example:
get_deployment_history(workspace_id=None, days=30)
Get deployment history.
Parameters:
workspace_id(str, optional): Filter by workspace ID
days(int): Number of days of history (default: 30)
Returns: DataFrame with deployment history
Example:
find_models_by_name(name_pattern, case_sensitive=False)
Find models by name pattern (partial match).
Parameters:
name_pattern(str): Pattern to search for in model names
case_sensitive(bool): Whether search should be case sensitive (default: False)
Returns: List[Dict] of matching models
Example:
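A name search with a placeholder pattern:
```python
matches = client.registry.find_models_by_name("churn", case_sensitive=False)
print(f"Found {len(matches)} model(s)")
```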
find_recent_models(days=7)
Find models created in the last N days.
Parameters:
days(int): Number of days to look back (default: 7)
Returns: List[Dict] of recent models
Example:
BatchPredictionJob
Represents a batch prediction job.
Attributes
job_id(str): Unique identifier for the batch prediction job
model_id(str): Model ID used for predictions
status(str): Current status of the job
dashboard_url(str): URL to view job results
Example:
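Reading the documented attributes from a job returned by execute_batch_prediction, reusing placeholder data:
```python
rows = [{"age": 42, "income": 55000}, {"age": 31, "income": 48000}]
job = client.prediction_service.execute_batch_prediction(rows, model_id="model_123")
print(job.job_id, job.model_id, job.status)
print(job.dashboard_url)  # view results in the dashboard
```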
TrainingJob
Represents a model training job.
Attributes
job_id(str): Unique identifier for the training job
model_name(str): Name of the model being trained
workspace_id(str): Workspace where model is being trained
status(str): Current status of the training job
dashboard_url(str): URL to view training progress/results
Example:
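Reading the documented attributes from a job returned by initiate_training_job, continuing the training sketch above:
```python
print(job.job_id, job.model_name, job.workspace_id)
print(job.status)         # e.g. pending, training, training_completed, failed
print(job.dashboard_url)  # follow training progress and results here
```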
See Also
ViewAIClient - Main client class
Model Classes - Model deployment and prediction classes
Configuration - Configuration options
Exceptions - Exception handling