I am a highly skilled Data Engineer with over 6 years of experience building scalable data pipelines, optimizing data processing workflows, and applying advanced analytics tools to drive business insights. My expertise spans AWS, Azure, and GCP, allowing me to create seamless data solutions that support large-scale operations. I am passionate about harnessing the power of data to solve complex problems and deliver actionable insights.
Throughout my career, I have worked with cutting-edge technologies such as Snowflake, Databricks, and Power BI to transform raw data into meaningful business intelligence. I have a strong background in ETL development, data governance, and machine learning, and I pride myself on creating data-driven solutions that enhance decision-making and improve operational efficiency.
I thrive in collaborative, cross-functional environments and enjoy partnering with teams to design innovative data architectures and pipelines. I'm always looking to improve processes and explore new technologies to ensure systems are efficient, secure, and scalable. With a keen eye for data quality and a passion for continuous learning, I am eager to contribute my expertise to impactful projects and drive business success.
Gained expertise in programming languages (C, C++, Java, Python) and developed core data management and analytics skills.
Gained blockchain exposure and completed hands-on projects in Big Data and data analysis, modeling and analyzing complex datasets.
Built cloud-based solutions using AWS and GCP, and learned to optimize data storage and data processing for high performance.
Focused on developing robust data platforms and participated in projects requiring attention to detail and clear communication with team members.
• Designed and implemented large-scale data pipelines and batch processing systems using Scala, Spark, and Hadoop for scalable data operations.
• Worked with distributed data processing systems to ensure efficient data management and faster processing times.
• Customized SSRS report output formats based on user preferences to enhance user experience and reporting flexibility.
• Utilized Azure Event Grid for real-time event processing, enabling instant data updates and reducing latency.
• Developed Power BI reports with dynamic visuals and slicers, automating reporting processes and reducing manual data handling hours.
• Collaborated on building fault-tolerant data processing systems to handle distributed datasets.
• Automated Databricks workspace deployments using Infrastructure as Code (IaC) tools like ARM and Terraform, streamlining development workflows.
• Managed SSAS perspectives for user-specific views and multilingual reporting, improving data accessibility for global teams.
• Automated data ingestion and processing workflows using cloud technologies, ensuring seamless integration and scalability.
• Designed and implemented caching strategies to improve query performance, reducing load times for end users.
• Architected and implemented a cloud-based data integration framework leveraging Azure and GCP, improving data synchronization and availability across platforms.
• Designed and implemented event-driven architectures and distributed data processing systems to handle large-scale data operations efficiently.
• Managed data synchronization within Azure Data Factory to streamline data workflows.
• Collaborated with executive leadership to develop a centralized data warehouse strategy, improving reporting accuracy and decision-making.
• Optimized Power BI reports by implementing advanced analytics (forecasting, clustering, sentiment analysis) and enhancing performance for faster load times.
• Integrated data lakes with Azure Functions for serverless data processing, reducing infrastructure complexity.
• Used Python to extract and transform data from APIs, databases, and files.
• Created dynamic SSRS report subscriptions for targeted delivery, enhancing reporting efficiency.
• Designed and deployed data processing systems on Azure, utilizing tools like Databricks, Spark, and SQL for high-volume data operations.
• Conducted data cleansing and normalization to ensure consistency and accuracy across datasets.
• Implemented predictive modeling solutions to improve user targeting and drive ROI.
• Led A/B testing to validate ad relevance strategies, optimizing campaign performance.
• Developed Spark-based data pipelines using Scala for large-scale distributed processing.
• Designed and optimized scalable ETL pipelines in Snowflake, leveraging Snowpark for custom data transformations and improving system scalability.
• Developed data processing solutions using Spark and other distributed computing technologies for large-scale datasets.
• Managed cross-functional collaborations with data science and analytics teams to develop a centralized data platform, streamlining access to critical business insights.
• Collaborated with leadership teams to define and refine data integration strategies, ensuring alignment with business objectives.
• Utilized AWS services for data storage and processing, integrating AWS Glue and S3 to manage large-scale ETL pipelines.
• Leveraged Azure Purview's AI capabilities to implement predictive data quality management solutions.
• Utilized Python for data cleaning and transformation to ensure data integrity.
• Developed CI/CD pipelines using Git for automated build, testing, and deployment.
• Defined and refined data integration strategies in alignment with business objectives, leveraging Snowflake for secure data sharing and scalable data integration.
• Implemented Azure Synapse user-defined roles and RBAC policies to streamline data access and security.
• Configured Power BI workspaces for efficient collaboration and reporting across teams.
• Managed Azure Data Factory pipeline monitoring and alerts to ensure operational efficiency.
• Implemented Power BI data security measures and compliance standards to protect sensitive data.
• Developed scalable data models supporting business growth, automating processes to reduce manual errors and improve operational efficiency, in alignment with Gorgias' vision for continuous improvement.
• Built predictive models to forecast customer behavior, optimizing ad performance and relevance in collaboration with product teams.
• Worked with data lakes, data warehouses, and distributed systems to optimize query performance, enabling faster reporting and accurate insights.
• Migrated terabyte-scale data from legacy systems to Databricks, enhancing query performance and system scalability. Integrated data health metrics and observability tools to strengthen governance and data reliability.
• Developed and enforced data governance policies, integrating security measures to ensure data integrity, compliance, and protection of sensitive customer information per industry regulations.