When hiring, we look for candidates who can thrive in our culture of trust, feedback, and rapid growth. We believe that diversity and inclusivity are essential to our success, and we provide equal employment opportunities regardless of background or identity. We support remote, hybrid, or onsite work at our offices in New York City, San Francisco, or Silicon Valley, and we’re dedicated to creating an environment where all employees can do their best work and contribute to the growth of our platform.
Our engineering team at OpenSea is looking for a strong, curious Data Engineer to own our analytics and machine learning pipelines. As a member of our data engineering team, you will collaborate with other engineers, data analysts, data scientists, and product managers, contributing significantly to the growth of one of the most rapidly expanding NFT marketplaces in the Web3 ecosystem.
- Design, build, and maintain data pipelines end to end, ensuring data accuracy, availability, and quality for the Analytics and Data Science teams
- Collaborate closely with Data Scientists to understand data requirements, develop data models, and optimize data pipelines for advanced analytics and machine learning use cases
- Develop and maintain scalable, efficient, and reliable ETL processes, using best practices for data ingestion, storage, and processing
- Work with stakeholders to identify and prioritize analytics requirements, and build out necessary analytics tools and dashboards
- Proactively monitor data pipelines, troubleshoot, and resolve data-related issues
- Contribute to the continuous improvement of data engineering practices, including documentation, code reviews, and knowledge sharing
- 5+ years of experience in data engineering
- Experience with big data technologies such as Snowflake, Hadoop, Spark, Airflow, or Flink
- Strong knowledge of AWS services, particularly those related to data storage, processing, and analytics (e.g., S3, Redshift, Glue, EMR, Kinesis, Lambda, and Athena)
- Expertise in SQL and proficiency in at least one programming language (Python, Go, Java)
- Familiarity with data warehousing concepts and schema design principles (e.g., Star Schema, Snowflake Schema)
- Strong problem-solving skills, a data-driven mindset, and a passion for working with large, complex datasets
- Excellent communication and collaboration skills, with the ability to work effectively across teams and stakeholders
If you don’t think you meet all of the criteria above but are still interested in the job, please apply. Nobody checks every box, and we’re looking for someone who is excited to join the team.