I am a Senior Software Engineer with over 8 years of experience building scalable backend platforms, distributed systems, data infrastructure, and full-stack applications in quantitative finance and advertising technology. My expertise lies in low-latency systems, real-time data processing, distributed computing, and production reliability. I have a strong background in designing and operating high-performance backend services, large-scale data pipelines, and infrastructure supporting mission-critical applications.
Throughout my career, I have demonstrated the ability to work across the full software development lifecycle, from architecture and implementation to deployment, monitoring, and long-term operational support. I collaborate closely with researchers, product teams, and infrastructure organizations to deliver reliable systems focused on performance, scalability, maintainability, and engineering excellence.
Currently, I work at The Voleon Group, where I build and maintain production infrastructure supporting large-scale quantitative trading systems and automated trading workflows. I develop low-latency backend services that transform machine learning model signals into real-time automated trading decisions, and I design distributed data pipelines with strong performance and reliability requirements.
Previously, I worked at Meta, where I designed scalable backend systems supporting advertising measurement and privacy-focused data processing platforms. I built large-scale distributed data pipelines using Spark, Presto, and Kafka, and contributed to privacy-enhancing technologies aligned with industry standards.
At Bloomberg LP, I developed backend infrastructure for fixed-income electronic trading platforms, focusing on latency-sensitive systems with high performance and operational stability. I also have research experience as a Research Assistant at Case Western Reserve University, where I worked on distributed computing, privacy-preserving computation, and large-scale data engineering systems, and served as a teaching assistant for data science and engineering courses.
- Build and maintain production infrastructure supporting large-scale quantitative trading systems and automated trading workflows.
- Develop low-latency backend services transforming machine learning model signals into real-time automated trading decisions.
- Design and optimize distributed data pipelines and model execution infrastructure with strong performance, reliability, and operational safety requirements.
- Improve system stability and operational efficiency through enhanced monitoring, observability, fault tolerance, and infrastructure tooling.
- Collaborate closely with quantitative researchers and machine learning engineers to deploy and operate production-grade ML-driven trading systems at scale.
- Contribute to backend platform improvements supporting large volumes of real-time market data and model execution traffic.
- Focus on building highly reliable systems for mission-critical production trading environments.
- Designed and developed scalable backend systems supporting advertising measurement, ranking, and privacy-focused data processing platforms.
- Built large-scale distributed data pipelines using Spark, Presto, and Kafka to support analytics, experimentation, and signal aggregation workflows.
- Worked on privacy-enhancing technologies and anonymization systems aligned with evolving industry privacy standards and Privacy Sandbox initiatives.
- Improved reliability and operational efficiency of data-intensive services handling large-scale advertising and measurement workloads.
- Collaborated with cross-functional engineering and infrastructure teams to deliver production-ready distributed systems with high throughput and scalability requirements.
- Participated in architectural discussions on system scalability, data quality, monitoring, and long-term maintainability.
- Contributed to backend services and internal tooling supporting operational visibility and engineering productivity.
- Developed backend infrastructure and distributed services supporting fixed-income electronic trading platforms and real-time trading workflows.
- Designed and implemented scalable systems for market data processing, trading operations, and financial infrastructure services.
- Worked on latency-sensitive backend systems with strong requirements for performance, reliability, and operational stability.
- Collaborated with trading system engineers and infrastructure teams to improve scalability and production reliability across electronic trading platforms.
- Built internal infrastructure and engineering tools supporting distributed backend services and production trading environments.
- Contributed to performance optimization initiatives for systems processing large volumes of financial and market data.
- Conducted research on distributed computing, privacy-preserving computation, and large-scale data engineering systems.
- Designed and implemented an efficient privacy-preserving outsourcing framework for large-scale matrix and tensor convolution workloads.
- Deployed distributed computing solutions on AWS clusters using Spark, MapReduce, and OpenMPI-based parallel processing frameworks.
- Worked on optimization techniques addressing computational complexity, communication overhead, and distributed I/O performance.
- Served as teaching assistant and lab instructor for DSCI 133: Data Science and Engineering, covering data pipelines, analytics, predictive modeling, distributed systems, and data security.
- Developed course materials, tutorials, assignments, and hands-on labs for undergraduate and graduate students.