About the team:
The Monetize Reporting and Analytics team is responsible for developing and maintaining reporting and data processing pipelines, and building reporting tools for Xandr Monetize clients. We work closely with core data pipeline teams at Xandr to leverage our data and present it to users of Xandr Monetize.
What you will work on:
As a member of the Monetize Reporting and Analytics team, you will be responsible for data in our system from the moment it leaves our raw log files, as it is aggregated throughout our pipelines, to when it is presented to stakeholders.
This can take many forms including, but not limited to:
Writing MapReduce or Spark jobs to aggregate raw data into an OLAP database, such as Vertica
Writing SQL to present reports to stakeholders
Working on our internal tools and processes which enable the data pipeline
Developing new UIs or other tools to enable reporting
Investigating stakeholder-reported issues through our entire data pipeline
Implementing strategies that analyze the impact of, and ensure the quality, performance, and accuracy of reports
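As a flavor of the first responsibility above, here is a minimal sketch of rolling raw log rows up to a coarser grain before loading them into an OLAP store. The field names and values are purely illustrative assumptions, not Xandr's actual schema, and a real job would run in MapReduce or Spark rather than plain Python.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical raw log rows; field names are illustrative only.
raw_logs = [
    {"ts": "2023-05-01T12:03:11", "publisher_id": 7, "imps": 1, "revenue": 0.002},
    {"ts": "2023-05-01T12:47:30", "publisher_id": 7, "imps": 1, "revenue": 0.003},
    {"ts": "2023-05-01T13:05:02", "publisher_id": 9, "imps": 1, "revenue": 0.001},
]

def aggregate_hourly(rows):
    """Roll raw rows up to (hour, publisher) grain, as an OLAP load step might."""
    totals = defaultdict(lambda: {"imps": 0, "revenue": 0.0})
    for row in rows:
        hour = datetime.fromisoformat(row["ts"]).strftime("%Y-%m-%dT%H:00")
        key = (hour, row["publisher_id"])
        totals[key]["imps"] += row["imps"]
        totals[key]["revenue"] += row["revenue"]
    return dict(totals)

hourly = aggregate_hourly(raw_logs)
```

In a Spark job the same rollup would be a `groupBy` on the derived hour column followed by sum aggregates; the pre-aggregated result is what lands in a database such as Vertica for fast reporting queries.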
3+ years of professional experience and demonstrated strong skills in the following areas:
Object-Oriented software development using a language such as Java, Go, or C++
Developing performant SQL queries, including aggregation queries
Solid Computer Science fundamentals in data structures, algorithms, time complexity, etc.
Writing well-tested code, deploying code safely, and working in a team with coding standards, including unit testing, functional testing, working with build systems, and participating in code and design reviews
Nice to Have Skills:
An ideal candidate will have a BA/BS or advanced degree in Computer Science or a related field, as well as experience and familiarity with the following areas:
Working in the Ad-Tech space
Debugging and troubleshooting stakeholder-reported issues that span multiple complex systems
Excellent technical communication skills, including building internal documentation
Frontend technologies (JavaScript, React)
Big data frameworks including Hadoop/HDFS/MapReduce and Spark
Stream-based processing models and distributed streaming platforms such as Kafka
BI tools such as PowerBI and Imply