Your Role
- Build data processing along the data pipeline based on jointly defined requirements, with data quality in mind and alerting in place
- Improve the scalability, speed, and accuracy of existing data pipelines
- Work closely with data architects, data scientists, and product owners to help them leverage data solutions in their products
- Provide guidance on suitable options, and design and build data pipelines for analytical solutions, from the data lake or data warehouse to specific microservices
- Support bug fixing and performance analysis along the data pipeline
- Bring experience with data warehouses and data lakes to the development team
- Be a strong advocate for a culture of process and data quality across development teams
Your Profile
- At least a Bachelor's degree in Computer Science, Software Engineering, or equivalent experience
- Experience building big data pipelines with Spark, Spark Streaming, or similar big data frameworks
- Strong programming skills in languages such as Scala, Java, or Python
- Strong analytical and problem-solving skills
- A good understanding of data structures and data architecture
- Good communication skills; able to convey complex ideas effectively
- Can-do attitude, service orientation, and self-learning skills; a superb team player
- Ability to deliver development solutions in accordance with Agile software development life-cycle methodologies such as Scrum
Bonus skill: knowledge of the telecom industry and its data models
Bonus skill: experience with SQL, NoSQL, Linux, shell scripting, Apache Airflow, Databricks, or cloud platforms such as Google Cloud or AWS