We’d love to hear from you if you are looking for a tech company with the following:
- Huge market ($800 billion): we are building the first AI-powered super app to help people own a car;
- Open, transparent, merit-based culture;
- Hyper-growth: 10X revenue growth in 2020;
- Best user experience: the #1-ranked app in the insurance comparison category;
- Strong leadership from Amazon, Microsoft, Facebook, Nvidia, Alibaba, and more, plus rockstar colleagues;
- A Series C company with $130M+ in total financing, backed by top VCs such as Y Combinator, Goodwater, SV Angel, Funders Club, and Bow Capital.
Jerry is the first Super App for car owners, helping them save money on all their car expenses (insurance, loans, repairs, etc.). Having built the #1-ranked and fastest-growing app in the insurance comparison category, we are tackling other areas of car ownership and looking for engineering talent to join us in expanding our product offerings. Headquartered in Silicon Valley, CA, we have offices in the U.S., China, and Canada.
A few examples of the exciting projects that we are working on:
- Create a smart prediction engine for customers’ insurance coverage needs
- Build predictive models on customer purchase behavior based on a large data set
- Use telematics tracking to build customer driving risk profiles
- Own the core company data pipeline and scale up data processing flows to meet rapid data growth
- Continuously evolve data models and schemas based on business and engineering needs
- Implement systems that track data quality and consistency
- Develop tools supporting self-service data pipeline management (ETL)
- Tune SQL and MapReduce jobs to improve data processing performance
- 3+ years of data engineering experience within a rigorous engineering environment
- Proficient in SQL, especially the Postgres dialect.
- Expertise in Python for developing and maintaining data pipeline code.
- Experience with Apache Spark and the PySpark library (experience with AWS extensions of PySpark is a plus).
- Experience with BI software (preferably Metabase or Tableau).
- Experience with the Hadoop (or similar) ecosystem.
- Experience with deploying and maintaining data infrastructure in the cloud (experience with AWS preferred).
- Comfortable working directly with data analysts to bridge business requirements and data engineering.