(Send resume to [email protected]; intern needed as well.)
The big data team in Huami's US office is looking for experienced engineers
with a big data background to join the team.
You will work closely with internal teams and partners to identify key
requirements and come up with the best solutions to address their needs.
Since the team is small, you may work on different areas of the data
processing pipeline, such as platform setup, system tuning and
troubleshooting, data ingestion, ETL, data mining, machine learning,
visualization, and reporting. (Our technical stack is AWS + open source.)
5+ years of overall experience is desired, ideally with Hadoop, Spark, and AWS.
Programming languages: Java (preferred), Python. Scripting languages are a plus.
Hands-on experience with the Hadoop or Spark ecosystems, such as HDFS, YARN,
Hive, HBase, Parquet, Spark Streaming, and MLlib.
Hands-on experience with cloud infrastructure, such as AWS EC2, Kinesis,
DynamoDB, RDS, S3, and Redshift, is a big plus.
Experience setting up, monitoring, and tuning applications and big data
systems is also a plus.
(We are also hiring a front-end intern.)