Job Purpose
- Identify, design, and implement internal process improvements, including re-designing infrastructure, constructing data architectures for greater scalability, optimizing data delivery, and automating manual processes.
- Collaborate with data analysts and scientists to make data easily accessible and usable for insights and analytics.
Job Responsibilities
- Build and optimize data pipelines and data storage architecture to support optimal functioning of the data analysis modules within the cloud infrastructure.
- Perform data processing and data model building in Data Lake/Azure Synapse using Databricks for structured, semi-structured, and unstructured data.
- Build analytical tools that utilize the data pipeline, providing actionable insights into key business performance metrics, including operational efficiency and customer acquisition.
Job Accountability
- Implement processes and systems to monitor data quality, ensuring production data is always accurate and available for the key stakeholders and business processes that depend on it. Oversee the resolution of architecture challenges and ensure data quality and integrity.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Work closely with all business units to develop a strategy for long-term data platform architecture.
- Stay current on industry trends and technologies to drive innovation and continuous improvement.
- Create, document, and implement unit test plans, scripts, and test cases to ensure that logic and syntax are correct and that program results are accurate.
Job Requirements
Qualifications:
- University degree in Computer Science, Engineering, or another technically oriented field
- Over 5 years of experience with Flink/Spark and Databricks
- Over 5 years of experience with Azure (DP-200, DP-201, and/or DP-203 certification is a plus)
- Passionate about analytics and machine learning technology and applications, and eager to learn
- Good English communication skills
Knowledge and skills:
- Knowledge of Big Data technologies, such as Spark, Hadoop/MapReduce
- Knowledge of Azure services such as Storage Accounts, Azure Databricks, etc.
- Good knowledge of SQL and excellent coding skills.
- Strong knowledge of data modeling and data mining.
- Working knowledge of various ML/DL frameworks such as Keras, TensorFlow, Python scikit-learn, and R
- Self-development, communication, and problem-solving skills.
- Open-minded, able to multi-task, a team player, flexible, and interested in learning new things.
- Able to use SSIS on the Azure cloud.
- Experienced in implementing ETL solutions that support incremental data extraction and loading.
- Able to deploy and configure SSIS packages, with a strong SSIS skill set.
- Able to debug and troubleshoot SSIS packages.
Competencies / Leadership Capability Model – Intermediate:
- Analytical thinking
- Analytical and report writing skills
- Communication skills
Employment type: Full-time
Address: 37 Tôn Đức Thắng Street, Sài Gòn Ward, Ho Chi Minh City, Vietnam