preprocessing-data-with-automated-pipelines

Automates data cleaning, transformation, and validation for ML tasks. Use when requesting "preprocess data", "clean data", "ETL pipeline", or "data transformation".

allowed-tools: Read, Write, Edit, Grep, Glob, Bash(cmd:*)
version: 1.0.0
author: Jeremy Longshore <jeremy@intentsolutions.io>
license: MIT

Allowed Tools

Read, Write, Edit, Grep, Glob, Bash(cmd:*)

Provided by Plugin

data-preprocessing-pipeline

Automated data preprocessing and cleaning pipelines

Tags: ai, ml · Version 1.0.0

Installation

This skill is included in the data-preprocessing-pipeline plugin:

/plugin install data-preprocessing-pipeline@claude-code-plugins-plus


Instructions

# Data Preprocessing Pipeline

This skill provides automated assistance for data preprocessing pipeline tasks.

## Overview

This skill enables Claude to construct and execute automated data preprocessing pipelines, ensuring data quality and readiness for machine learning. It streamlines data preparation by automating common tasks such as data cleaning, transformation, and validation.

## How It Works

1. **Analyze Requirements**: Claude analyzes the user's request to understand the specific data preprocessing needs, including data sources, target format, and desired transformations.
2. **Generate Pipeline Code**: Based on the requirements, Claude generates Python code for an automated data preprocessing pipeline using relevant libraries and best practices. This includes data validation and error handling.
3. **Execute Pipeline**: The generated code is executed, performing the data preprocessing steps.
4. **Provide Metrics and Insights**: Claude provides performance metrics and insights about the pipeline's execution, including data quality reports and potential issues encountered.

## When to Use This Skill

This skill activates when you need to:

- Prepare raw data for machine learning models.
- Automate data cleaning and transformation processes.
- Implement a robust ETL (Extract, Transform, Load) pipeline.

## Examples

### Example 1: Cleaning Customer Data

User request: "Preprocess the customer data from the CSV file to remove duplicates and handle missing values."

The skill will:

1. Generate a Python script to read the CSV file, remove duplicate entries, and impute missing values using appropriate techniques (e.g., mean imputation) — see the sketch under "Example Sketches" below.
2. Execute the script and provide a summary of the changes made, including the number of duplicates removed and the number of missing values imputed.

### Example 2: Transforming Sensor Data

User request: "Create an ETL pipeline to transform the sensor data from the database into a format suitable for time series analysis."

The skill will:

1. Generate a Python script to extract sensor data from the database, transform it into a time series format (e.g., resampling to a fixed frequency), and load it into a suitable storage location — also sketched under "Example Sketches" below.
2. Execute the script and provide performance metrics, such as the time taken for each step of the pipeline and the size of the transformed data.

## Best Practices

- **Data Validation**: Always include data validation steps to ensure data quality and catch potential errors early in the pipeline.
- **Error Handling**: Implement robust error handling to gracefully handle unexpected issues during pipeline execution.
- **Performance Optimization**: Optimize the pipeline for performance by using efficient algorithms and data structures.

## Integration

This skill can be integrated with other Claude Code skills for data analysis, model training, and deployment. It provides a standardized way to prepare data for these tasks, ensuring consistency and reliability.

## Prerequisites

- Appropriate file access permissions
- Required dependencies installed

## Instructions

1. Invoke this skill when the trigger conditions are met.
2. Provide necessary context and parameters.
3. Review the generated output.
4. Apply modifications as needed.

## Output

The skill produces structured output relevant to the task.

## Error Handling

- Invalid input: prompts for correction
- Missing dependencies: lists required components
- Permission errors: suggests remediation steps

## Resources

- Project documentation
- Related skills and commands
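## Example Sketches

### Example 1: cleaning customer data

The following is a minimal sketch of the kind of script the skill might generate for Example 1, not its actual output. It assumes a pandas-based pipeline; the file names ("customers.csv", "customers_clean.csv") and the mean-imputation strategy are illustrative assumptions.

```python
# Minimal sketch of a cleaning script for Example 1.
# Assumes pandas is installed; input/output paths are hypothetical.
import pandas as pd


def clean_customer_data(input_path: str, output_path: str) -> dict:
    df = pd.read_csv(input_path)

    # Basic validation: fail fast if the file has no rows.
    if df.empty:
        raise ValueError(f"{input_path} contains no rows")

    # Remove exact duplicate rows.
    rows_before = len(df)
    df = df.drop_duplicates()
    duplicates_removed = rows_before - len(df)

    # Impute missing numeric values with the column mean (one simple strategy).
    missing_before = int(df.isna().sum().sum())
    numeric_cols = df.select_dtypes(include="number").columns
    df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].mean())
    missing_after = int(df.isna().sum().sum())

    df.to_csv(output_path, index=False)

    # Summary the skill can report back to the user.
    return {
        "rows_out": len(df),
        "duplicates_removed": duplicates_removed,
        "missing_values_imputed": missing_before - missing_after,
    }


if __name__ == "__main__":
    print(clean_customer_data("customers.csv", "customers_clean.csv"))
```

The returned dictionary mirrors the summary the skill reports after execution: rows written, duplicates removed, and missing values imputed.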
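### Example 2: transforming sensor data

For Example 2, an ETL sketch under similar assumptions is shown below: the SQLite source, the "sensor_readings" table and its column names, the 1-minute resampling frequency, and the Parquet output (which requires pyarrow or fastparquet) are all illustrative, not part of the skill's defined behavior.

```python
# ETL sketch for Example 2: extract sensor readings, resample to a fixed
# frequency, and load the result. Database, table, and column names are
# hypothetical.
import sqlite3

import pandas as pd


def etl_sensor_data(db_path: str, output_path: str, freq: str = "1min") -> None:
    # Extract: pull raw readings from the database.
    with sqlite3.connect(db_path) as conn:
        df = pd.read_sql_query(
            "SELECT timestamp, sensor_id, value FROM sensor_readings", conn
        )

    # Transform: index by time and resample each sensor to a fixed frequency.
    df["timestamp"] = pd.to_datetime(df["timestamp"])
    resampled = (
        df.set_index("timestamp")
        .groupby("sensor_id")["value"]
        .resample(freq)
        .mean()
        .reset_index()
    )

    # Load: write the regular time series for downstream analysis
    # (Parquet output assumes pyarrow or fastparquet is installed).
    resampled.to_parquet(output_path, index=False)


if __name__ == "__main__":
    etl_sensor_data("sensors.db", "sensor_timeseries.parquet")
```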

Skill file: plugins/ai-ml/data-preprocessing-pipeline/skills/preprocessing-data-with-automated-pipelines/SKILL.md