We advance data science and technology to bring financial inclusion to all, on a scale never before possible. In just two years, Trusting Social has become the biggest and fastest-growing credit scoring platform and B2B2C loan marketplace in Southeast Asia, with more than 50% of new-to-bank personal loans in Vietnam going through us. We are on a trajectory to score a billion people, with our platform launching in other Asian countries including Indonesia and India. Funded by Sequoia Capital, we have been profitable since three months after our first launch in Vietnam. Headquartered in Singapore, our teams are distributed across Ho Chi Minh City, Melbourne, Jakarta, Manila, Bangalore, and Mumbai.

The company is led by a dynamic group of seasoned entrepreneurs, technologists and scientists seeking to disrupt the lending space and democratize access to capital for billions of underserved consumers.

Top 3 Reasons To Join Us

  • Reshape the Future of Banking
  • Be Part of a Unicorn Startup
  • Top Salary, Awesome Benefits

The Job

Trusting Social is looking for a passionate and committed Big Data Engineer to work on Trust-Data, our data processing platform for collecting, storing, processing, and analysing huge data sets. The primary focus will be on designing and developing optimal solutions that Trusting Social's data scientists can then use for deep analysis. After launch, the ownership and onus of maintaining, enhancing, and monitoring these solutions lies with you. You will also be responsible for integrating them with the architecture used across the company. Why is this job key to Trusting Social? The data sets ingested and transformed by the data engineering team are the primary source for Trusting Social's data scientists, who run their complex algorithms and routines on the quality output the Trust-Data engineering team delivers.

Key Job Responsibilities:

A data engineer at Trusting Social works on the data pipeline infrastructure that is veritably the backbone of our business. On any given day you would be writing elegant, functional Scala code to crunch terabytes of data on Hadoop clusters, mostly using Spark. In a given week you would own a data pipeline deployment to clusters, whether on-prem, on AWS, on Azure, or elsewhere. In a given month you would be managing Hadoop clusters, from security to reliability to high availability (HA). Did we mention we are building a pluggable, unified data lake from scratch? From time to time you will take on new challenges automating and scaling tasks for the data science team. We constantly look to improve our frameworks and pipelines, so learning on the job is a given. This is the big picture of our big data system.
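To give a flavour of the day-to-day work described above, here is a minimal sketch of the kind of Spark batch job a data engineer here might write in Scala. The paths, table layout, and column names (telco_events, subscriber_id, counterparty_id) are illustrative assumptions, not the actual Trust-Data pipelines:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Minimal Spark batch job: aggregate a day's raw events into
// per-subscriber features. All paths and column names are hypothetical.
object DailyFeatureJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-feature-job")
      .getOrCreate()

    // Read one day's partition of raw events from HDFS (Parquet assumed).
    val events = spark.read.parquet("hdfs:///data/raw/telco_events/dt=2019-01-01")

    // Crunch the raw events into simple per-subscriber aggregates.
    val features = events
      .groupBy("subscriber_id")
      .agg(
        count(lit(1)).as("event_count"),
        countDistinct("counterparty_id").as("distinct_contacts")
      )

    // Write the result back to the lake for downstream use by data scientists.
    features.write
      .mode("overwrite")
      .parquet("hdfs:///data/features/daily/dt=2019-01-01")

    spark.stop()
  }
}
```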

Our expertise and requirements include, but are not limited to: Spark, Scala, HDFS, YARN, Hive, Kafka, distributed systems, Python, data stores (relational and NoSQL), and Airflow.
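As one illustration of how several of these pieces fit together, here is a hedged sketch of a Spark Structured Streaming job that reads events from Kafka and lands them on HDFS. The broker addresses, topic name, and paths are placeholders, and the sketch assumes the spark-sql-kafka connector is on the classpath:

```scala
import org.apache.spark.sql.SparkSession

// Sketch: stream events from Kafka into the HDFS data lake with Spark
// Structured Streaming. Brokers, topic, and paths are placeholder values.
object KafkaToLake {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("kafka-to-lake")
      .getOrCreate()

    // Subscribe to a Kafka topic; each record arrives as key/value bytes.
    val stream = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
      .option("subscribe", "raw-events")
      .load()
      .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

    // Append micro-batches as Parquet on HDFS; the checkpoint directory
    // lets the query recover its position across restarts.
    val query = stream.writeStream
      .format("parquet")
      .option("path", "hdfs:///data/raw/stream/raw-events")
      .option("checkpointLocation", "hdfs:///checkpoints/raw-events")
      .start()

    query.awaitTermination()
  }
}
```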

Your Skills and Experience 

  • Proficiency with Hadoop v2, MapReduce, HDFS
  • Experience with Spark and its latest features
  • Experience with integration of data from multiple data sources
  • Experience with NoSQL databases, such as HBase, Cassandra, or MongoDB
  • Knowledge of various ETL techniques and frameworks, such as Flume
  • Experience with various messaging systems, such as Kafka or RabbitMQ
  • Experience with Big Data ML tool kits, such as Mahout, SparkML, or H2O
  • Good understanding of Lambda Architecture, along with its advantages and drawbacks
  • Experience with Cloudera and Hortonworks
  • BS or MS in Computer Science
  • At least 2 years of experience as a software architect for large-scale enterprise software solutions


Why You'll Love Working Here

  • Top market salary
  • Generous comprehensive optional health insurance package for family
  • Free premium gym membership
  • Free Grab rides to work
  • Housing allowance if you move closer to the office


Trần Công Danh

ITEC Career Centre (ICC), International Training and Education Center (ITEC)

Tel: (84)-8-38303625 (Ext 114) | Fax: (84)-8-38325926 | Email: career.center@itec.hcmus.edu.vn
