Bachelor of Science in Computer Science, Engineering, Mathematics, Statistics, or related subject
2-5 years of experience in developing modern data pipelines and applications for analytics (e.g., BI, reporting, dashboards) and advanced analytics (e.g., machine learning, deep learning) use cases
Expertise in SQL and the Python programming language
Expertise designing and implementing data pipelines using modern data engineering approaches and tools: SQL, Python, Delta Lake, Databricks, Spark, Glue, NiFi, StreamSets, cloud-native DWH (BigQuery, Snowflake, Redshift), Kafka/Confluent, Presto/Dremio/Athena
Experience developing solutions on cloud computing services and infrastructure, particularly AWS
Experience with database development using a variety of relational, NoSQL, and cloud database technologies
Experience with BI tools such as Tableau, Qlik, or Power BI, or cloud-native ones such as Looker or QuickSight
Conceptual knowledge of data and analytics, such as dimensional modeling, ETL, reporting tools, data governance, DWH, etc.
Exposure to machine learning, data science, computer vision, artificial intelligence, statistics, and/or applied mathematics
Excellent communication, listening, and influencing skills