(We are looking for immediate joiners.)
About us
DataWeave provides Retailers and Brands with “Competitive Intelligence as a Service” that enables them to make key decisions that impact their revenue. Powered by AI, we provide easily consumable, actionable competitive intelligence by aggregating and analyzing billions of publicly available data points on the Web, helping businesses develop data-driven strategies and make smarter decisions.
Data Engineering and Delivery @DataWeave
We, the Delivery / Data Engineering team at DataWeave, deliver the intelligence, with actionable data, to the customer. One part of the work is writing effective crawler bots to collect data across the web, which calls for reverse engineering and scalable Python code. The other part is crunching that data with our big data stack / pipeline. The underpinnings: tooling, domain awareness, fast-paced delivery, and pushing the envelope.
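To give a flavour of the crawler side of the work, here is a minimal, hypothetical sketch of a polite crawler bot in Python. The CSS selectors, field names, and user agent are illustrative assumptions, not our actual stack.

```python
# Illustrative crawler sketch (requests + BeautifulSoup assumed installed).
# Selectors, field names, and the user agent are hypothetical examples.
import time

import requests
from bs4 import BeautifulSoup

HEADERS = {"User-Agent": "example-crawler/0.1"}  # identify the bot politely

def text_of(soup, selector):
    """Extract stripped text for a CSS selector, or None if absent."""
    node = soup.select_one(selector)
    return node.get_text(strip=True) if node else None

def fetch_product(url):
    """Fetch one product page and pull out a few structured fields."""
    resp = requests.get(url, headers=HEADERS, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    return {
        "url": url,
        "title": text_of(soup, "h1.product-title"),
        "price": text_of(soup, "span.price"),
    }

def crawl(urls):
    """Crawl a list of URLs, skipping failures and pausing between requests."""
    for url in urls:
        try:
            yield fetch_product(url)
        except requests.RequestException:
            continue  # a real bot would retry with backoff
        time.sleep(1.0)  # naive politeness delay; production bots need more
```

At real scale this grows into distributed scheduling, proxy rotation, and reverse engineering of site internals, which is where the interesting problems live.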
How do we work?
It's hard to tell what we love more: problems or solutions! Every day, we choose to address some of the hardest data problems there are. We are in the business of making sense of messy public data on the web. At serious scale! Read more at Become a DataWeaver.
What do we offer?
● Some of the most challenging data problems. Huge text and image datasets that you can play with!
● Ability to see the impact of your work and the value you're adding to our customers almost immediately.
● Opportunity to work on different problems and explore a wide variety of tools to figure out what really excites you.
● A culture of openness. Fun work environment. A flat hierarchy. Organization-wide visibility. Flexible working hours.
● Learning opportunities with courses and tech conferences. Mentorship from senior members of the team.
● Last but not least, competitive salary packages and fast-paced growth opportunities.
Relevant set of skills:
● 2-7 years of experience, along with good communication and collaboration skills.
● Ability to code and script, with a strong grasp of CS fundamentals and excellent problem-solving abilities.
● Comfort with frequent, incremental code testing and deployment; data management skills.
● Good understanding of RDBMS
● Experience building data pipelines and processing large datasets (see the sketch after this list).
● Knowledge of building crawlers and data mining is a plus.
● Working knowledge of open-source data stores such as MySQL, Solr, Elasticsearch, and Cassandra is a plus.
● Expertise in Python programming.
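As referenced above, here is a toy sketch of what building a data pipeline can look like in plain Python: streaming records through small, composable stages. The CSV input, field names, and aggregation are hypothetical examples, not our production pipeline.

```python
# Toy pipeline sketch: lazy, composable generator stages.
# The "price" and "store" fields and the CSV source are hypothetical.
import csv
from collections import defaultdict

def read_records(path):
    """Stream rows from a CSV file without loading it all into memory."""
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def clean(records):
    """Drop rows with missing prices and normalize the price field."""
    for rec in records:
        if rec.get("price"):
            rec["price"] = float(rec["price"].strip().lstrip("$"))
            yield rec

def average_price_by_store(records):
    """Aggregate: mean price per store."""
    totals = defaultdict(lambda: [0.0, 0])
    for rec in records:
        t = totals[rec["store"]]
        t[0] += rec["price"]
        t[1] += 1
    return {store: s / n for store, (s, n) in totals.items()}

# Usage: compose the stages; records flow through one at a time.
# averages = average_price_by_store(clean(read_records("products.csv")))
```

The same stage-by-stage shape carries over to the distributed tooling we use when the datasets no longer fit on one machine.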
Role and responsibilities: