
Data Engineer

Rate (USD): Not specified
Work schedule: Full Time
Language skills: English
Available for hire: Yes
About me

I am a Data Engineer with over 5 years of experience designing, building, and optimizing end-to-end ETL pipelines, data warehouses, and cloud-based data solutions across GCP, Azure, and AWS. Skilled in SQL, Python, R, and DBT, I transform complex datasets into actionable insights that empower decision-making and operational efficiency. I have architected scalable data solutions and automated workflows using BigQuery, Snowflake, Azure Synapse, and Databricks, streamlining processing across multiple production systems. My expertise in Power BI, Tableau, and Looker enables interactive reporting and visualization for various departments, while my experience with Airflow, Control-M, and Kafka ensures reliable and event-driven data pipelines.

Passionate about solving complex data challenges, I thrive in Agile environments, collaborating with cross-functional teams to enhance data quality, model reliability, and business intelligence. I have successfully migrated enterprise data workflows to cloud platforms, automated data ingestion and processing, and developed forecasting models to support proactive decision-making. I am adept at integrating modern DevOps practices such as CI/CD pipelines and version control to streamline deployment and maintenance.

Throughout my career, I have demonstrated strong skills in cloud platforms including Azure, GCP, and AWS, and have hands-on experience with various data engineering tools and orchestration frameworks. I have also contributed to academic environments as a Teaching Assistant, guiding students through practical labs on SQL, ETL workflows, and data modeling. My commitment to continuous learning is reflected in my certifications and participation in workshops and hackathons.

I am seeking opportunities in Data Engineering, Data Analytics, ETL Development, Business Intelligence, and Data Analysis roles where I can leverage my cloud, analytics, and BI expertise to drive measurable impact. I am confident that my technical skills, combined with my collaborative approach and problem-solving mindset, will add significant value to any data-driven organization.




Education

08/2023 – 05/2025 Master of Science in Computer Information Technology @ Purdue University Northwest
08/2016 – 04/2020 Bachelor of Technology in Computer Science Engineering @ Jawaharlal Nehru Technological University

Experience

Aug 2025 – Present Data Analyst/Engineer @ McKesson

• Designed and developed ETL workflows using Azure Data Factory, Apache NiFi, and Snowflake, optimizing ingestion of 500M+ records monthly and accelerating business intelligence reporting.
• Implemented PySpark transformations in Databricks, reducing data processing time and enabling near real-time analytics for operations and finance teams.
• Built and maintained Tableau dashboards and SQL-based pipelines, streamlining KPI monitoring and reducing ad performance reporting time.
• Collaborated with cross-functional stakeholders to define KPI logic, develop scalable data models, and enable self-service reporting across commercial and marketing analytics teams.
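The SQL-based KPI pipelines described above can be sketched in miniature with an in-memory database; the table name, columns, and rows below are hypothetical, not taken from any actual McKesson system.

```python
import sqlite3

# In-memory stand-in for a warehouse table; schema and data are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL, order_date TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("East", 120.0, "2025-08-01"),
     ("East", 80.0, "2025-08-02"),
     ("West", 200.0, "2025-08-01")],
)

# A KPI query of the kind a dashboard pipeline might run nightly:
# total revenue and order count per region.
kpi_rows = conn.execute(
    "SELECT region, SUM(amount) AS revenue, COUNT(*) AS orders "
    "FROM orders GROUP BY region ORDER BY region"
).fetchall()
```

In a production pipeline the same query shape would run against Snowflake or BigQuery and feed a Tableau extract rather than a Python list.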

Aug 2024 – Dec 2024 Teaching Assistant @ Purdue University Northwest

• Led 11 structured lab sessions on PostgreSQL and DBeaver, teaching relational joins, subqueries, and indexing techniques using 3 practice databases.
• Demonstrated cloud-based ETL workflows using Azure Data Factory and BigQuery, guiding 32 students through 5 ingestion pipelines built for class projects.
• Evaluated 28 SQL and data modeling assignments, scoring 6 schema design tasks and 4 optimization challenges per submission for final grading.
• Integrated Apache Airflow and Azure Synapse Analytics into 3 hands-on labs, simulating orchestration tasks and warehouse query execution across 2 course modules.
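A condensed version of the join/subquery/indexing material from those labs can be demonstrated with SQLite; the tables, rows, and student names here are invented for illustration (the actual labs used PostgreSQL).

```python
import sqlite3

# Tiny practice database in the spirit of the lab sessions.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE grades (student_id INTEGER, course TEXT, grade INTEGER);
CREATE INDEX idx_grades_student ON grades(student_id);  -- indexing demo
INSERT INTO students VALUES (1, 'Ada'), (2, 'Grace');
INSERT INTO grades VALUES (1, 'SQL', 95), (1, 'ETL', 88), (2, 'SQL', 91);
""")

# Relational join: each student's grade per course.
joined = db.execute("""
    SELECT s.name, g.course, g.grade
    FROM students s JOIN grades g ON g.student_id = s.id
    ORDER BY s.name, g.course
""").fetchall()

# Subquery with aggregation: students averaging above 90.
top = db.execute("""
    SELECT name FROM students
    WHERE id IN (SELECT student_id FROM grades
                 GROUP BY student_id HAVING AVG(grade) > 90)
    ORDER BY name
""").fetchall()
```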

Nov 2020 – Aug 2023 Assistant System Engineer/Data Engineer @ Tata Consultancy Services

• Migrated 10 enterprise data workflows from Netezza to BigQuery using Ab Initio, applying source-to-target schema mapping to support readiness across 3 environments.
• Orchestrated 12 ETL pipelines in Control-M, automating nightly data transfers and reducing manual reruns by 8 per week.
• Provisioned BigQuery and AWS S3 buckets for secure archive storage and configured Cloud Shell access, streamlining data delivery for 6 downstream systems.
• Constructed CI/CD hooks using GitHub Actions to auto-deploy migration configs across Dev, UAT, and Prod, reducing manual deployment steps by 18 per cycle.
• Transformed 40 legacy SQL queries for BigQuery schema compatibility, reducing syntax-related parsing errors by 22 during QA cycles.
• Applied UNIX shell scripting and Ab Initio to standardize backend batch jobs across 15 structured datasets, reducing job failure frequency by 5 per week.
• Integrated Kafka with GCP Pub/Sub for ingesting operational logs during test runs, enabling detection of 2 critical alert triggers pre-production.
• Automated post-deployment notifications using AWS Lambda and documented rollback protocols in Confluence, accelerating onboarding for 2 offshore teams.
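The legacy-SQL transformation step can be sketched as a small dialect-translation pass; the two rewrite rules below (Netezza/Postgres-style `NVL` and `::type` casts to BigQuery's `IFNULL` and `CAST`) are only illustrative and nowhere near a complete translator.

```python
import re

# Toy dialect-translation rules for porting legacy SQL to BigQuery.
# These cover just two constructs, for illustration only.
RULES = [
    (re.compile(r"\bNVL\s*\(", re.IGNORECASE), "IFNULL("),
    # Postgres/Netezza-style cast `expr::type` -> BigQuery CAST(expr AS type)
    (re.compile(r"(\w+)::(\w+)"), r"CAST(\1 AS \2)"),
]

def to_bigquery(sql: str) -> str:
    """Apply each rewrite rule in order and return the translated SQL."""
    for pattern, repl in RULES:
        sql = pattern.sub(repl, sql)
    return sql

legacy = "SELECT NVL(amount, 0), created_at::date FROM sales"
migrated = to_bigquery(legacy)
```

Real migrations typically pair such mechanical rewrites with QA cycles against the target warehouse, since regex rules cannot catch semantic differences between dialects.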

Jan 2020 – Oct 2020 Data Analyst @ ADANI

• Automated ingestion of real-time sensor logs from 7 grid substations using GCP Dataflow, consolidating telemetry in BigQuery to accelerate fault detection and inventory alerts.
• Engineered a Python forecasting model with Pandas and NumPy to predict energy load spikes, enabling proactive transformer dispatch across 3 energy zones.
• Developed 5 interactive Power BI dashboards with DAX, visualizing outage trends and maintenance schedules for 8 control centers, improving decision-making for operations teams.
• Investigated failure patterns and lead times for 40+ critical grid components using SQL and Python, streamlining spare inventory planning and reducing manual tracking effort.
• Standardized lifecycle data for 120+ equipment records and created 8 metadata templates in Azure Data Lake and BigQuery, improving data consistency for preventive maintenance and audits.
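A load-spike forecast of the kind described above can be reduced to a moving-average baseline plus a threshold check; this pure-Python sketch stands in for the Pandas/NumPy model, and the load readings are invented for illustration.

```python
# Minimal moving-average forecaster with a spike alert; stand-in for
# the Pandas/NumPy model. All readings below are hypothetical.
def moving_average_forecast(series, window=3):
    """Forecast the next value as the mean of the last `window` readings."""
    if len(series) < window:
        raise ValueError("not enough history for the chosen window")
    return sum(series[-window:]) / window

def spike_alert(latest, forecast, threshold=1.2):
    """Flag a spike when the latest reading exceeds the forecast by 20%."""
    return latest > threshold * forecast

hourly_load_mw = [310.0, 325.0, 330.0, 335.0, 420.0]
forecast = moving_average_forecast(hourly_load_mw[:-1])  # history only
alert = spike_alert(hourly_load_mw[-1], forecast)
```

A production model would add seasonality and exogenous features, but the forecast-then-threshold shape stays the same.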

Jan 2020 – Oct 2020 Data Analyst Intern @ ADANI

• Queried and cleaned 12 equipment datasets from PostgreSQL and MySQL, and performed staging in Azure Data Lake for reporting automation and schema alignment.
• Constructed 2 Tableau dashboards to visualize downtime, throughput, and energy usage across 5 logistics hubs, improving visibility for the asset maintenance team.
• Generated 7 weekly Excel summary reports using PivotTables and VLOOKUP, supporting lifecycle analysis and performance tracking for grid transformers.
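The weekly PivotTable-style rollups mentioned above amount to a group-and-sum over tagged records; this sketch uses invented downtime figures and hub names purely for illustration.

```python
from collections import defaultdict

# PivotTable-style weekly rollup in plain Python; records are invented.
records = [
    {"week": "2020-W30", "hub": "North", "downtime_h": 2.0},
    {"week": "2020-W30", "hub": "South", "downtime_h": 1.5},
    {"week": "2020-W31", "hub": "North", "downtime_h": 0.5},
]

# Group by week and sum downtime, like a PivotTable with `week` on rows.
weekly_downtime = defaultdict(float)
for rec in records:
    weekly_downtime[rec["week"]] += rec["downtime_h"]
```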

