Data Engineer (Data Warehouse Developer)
Date: Mar 27, 2026
Location: Pune, IN
Company: PACCAR
We are looking for an enthusiastic and inquisitive Data Engineer (Data Warehouse Developer) at DAF, responsible for designing and building the data warehouse environment that supports analytics and reporting. The role focuses on data integration and transformation, and on ensuring data is accurate and accessible for decision-making. The Data Warehouse Developer maintains the ETL process for the data warehouse, from the AWS drop zone through to the Snowflake reporting tables.
Key Responsibilities:
- Design and develop solutions that meet business and IT requirements.
- Design and implement data warehouse schemas such as star, snowflake, or galaxy schemas.
- Create and maintain logical and physical data models aligned with reporting and analytics needs.
- Develop, test, and maintain ETL (Extract, Transform, Load) processes to load data from various sources into the data warehouse.
- Ensure data quality, consistency, and integrity during the ETL process.
- Optimize ETL processes for performance, scalability, reliability, and cost efficiency, and configure monitoring.
- Tune SQL queries and database performance to ensure fast data retrieval and reporting.
- Work closely with business analysts, data architects, IT architects, and BI developers to deliver compliant data solutions.
- Communicate with stakeholders to gather requirements and provide updates on development progress.
- Coordinate with business stakeholders to understand requirements and the relevant user stories.
- Communicate technical needs effectively with business and internal users within the teams.
- Work as part of an Agile team to deliver projects that meet customer expectations and achieve the desired business benefits.
Education Requirements:
- Bachelor’s degree in Computer Science, Information Technology, or related fields.
Knowledge & Skills REQUIRED:
- 7-10+ years of hands-on IT technical experience.
- Expertise working with RDBMS: Snowflake
- Working knowledge of AWS services such as S3 and Lambda.
- Databases: NoSQL, Cloud databases
- ETL: Informatica, Talend, SSIS
- Programming: Python, scripting languages
- DevOps & Source Control: Git, GitHub, DevOps pipelines (Jenkins, Azure DevOps)
- Data Modelling: Star schema, Snowflake schema
- Reporting & BI: Tableau
- Data Quality: Validation, cleansing
- Monitoring & Performance Optimization: Performance analysis and tuning
- Understanding of Python or other programming languages.
- Working knowledge of Git/GitHub for source management and pipelines.
Technical / Professional Experience Desired:
- Databases: NoSQL, Cloud databases
- ETL: Big Data ETL (Spark, Hadoop)
- Programming: Advanced DevOps scripting
- Cloud: AWS Redshift, Azure Synapse, GCP BigQuery
- DevOps & Source Control: Docker, Kubernetes, advanced CI/CD
- Data Modelling: Advanced data modelling techniques
- Reporting & BI: Power BI, Looker
- Data Quality: Data governance and compliance
- Monitoring & Performance Optimization: Solution monitoring, operational dashboards, and alert management
COMPETENCIES AND BEHAVIORS:
- Excellent problem-solving skills and written and verbal communication skills.
- Able to work effectively in a team environment with little or no supervision.
- Able to prioritize and track multiple initiatives concurrently.
- Embody and promote the PACCAR ITD values of Teamwork, Continuous Improvement, Commitment, Openness, and Learning.