JOB DESCRIPTION
- Create and maintain optimal data pipeline architecture;
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies;
- Drive internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.;
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs.
JOB REQUIREMENTS
- Bachelor's degree or higher in an engineering field (Computer Science, Computer Engineering, etc.);
- Advanced working SQL knowledge and experience working with a variety of databases;
- Programming experience in one or more application or systems languages (Go, Python, Ruby, Java, etc.);
- Experience building and optimizing data pipelines, architectures, and large-scale data processing (Data Warehousing, Search, Real-time Dashboarding);
- Experience with stream-processing systems: Storm, Spark Streaming, etc.;
- Experience extending and implementing core functionality and libraries in data processing platforms (Hive/Pig UDFs, Spark/Spark SQL, Storm Bolts, etc.);
- A strong desire to be part of a team that delivers impactful results every day;
- A commitment to writing understandable, maintainable, and reusable software;
- Well versed in software and data design patterns;
- Willingness to learn new languages and methodologies;
- An enormous sense of ownership.