Data Engineers
Working at Atlassian
Atlassian can hire people in any country where we have a legal entity. Assuming you have eligible working rights and sufficient time zone overlap with your team, you can choose to work remotely or from an office (unless it’s necessary for your role to be performed in the office). Interviews and onboarding are conducted virtually, as part of being a distributed-first company.
JOB DUTIES:
- Build the data lake, maintain big data pipelines and services, and facilitate the movement of billions of messages each day by writing proficient, scalable, and modular code in Python, Java, and/or Scala.
- Work directly with business stakeholders and platform and engineering teams to enable growth and retention strategies through development of conceptual, logical, and physical models for databases and microservices, covering all required entities, and perform related system maintenance.
- Help business partners understand the intricacies of data and build systems that generate insights.
- Work with Analytics teams to understand business rules and apply them to build data pipelines that map and aggregate data across various sources, helping teams discover and act on metrics for improvement.
- Help stakeholder teams ingest data from internal and third-party tools into the data lake faster by making data pipelines more efficient.
- Build data pipelines and a microservice framework in Python for collecting events across various company domains, enriching/transforming incoming events, routing them to the appropriate Kinesis streams, and persisting them in DynamoDB and S3 (see the first sketch after this list).
- Build analytics pipelines that process incoming data and compute aggregations using SQL, Spark/Hive, and Python on Databricks clusters (see the second sketch after this list).
- Build serverless Lambda frameworks in Python for various event-driven systems.
- Work on an Amazon Web Services (AWS) based data lake backed by the full suite of open-source projects such as Presto, Spark, Airflow, and Hive, and other streaming technologies, to process large volumes of streaming data.
- Build highly reliable services by managing and orchestrating a multi-petabyte-scale data lake.
- Write queries on large partitioned datasets stored as Parquet files in S3.
- Build Spark notebooks in Python on Databricks clusters to help teams gain quick insights into various metrics.
- Develop reports and visualizations in Tableau to facilitate decision making.
- Respond to technical questions within team channels.
- Perform data modeling and apply knowledge of data warehousing concepts; use normalization and snowflake/star schema approaches to build models in relational data warehouses by assessing user querying patterns and accounting for table growth over time.
- Build models in a NoSQL system such as DynamoDB by determining access patterns, hot spotting, and WCU/RCU requirements.
- Write SQL queries on large partitioned datasets, and structure data and data storage practices to tune queries and improve their performance/runtime.
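The event-collection and analytics duties above lend themselves to short illustrations. The first is a minimal sketch, assuming hypothetical resource names (an SQS-style trigger, a stream called enriched-events, a table called events, and a bucket called event-archive, none of which come from this posting), of a Python Lambda handler that enriches incoming events, routes them to a Kinesis stream, and persists them to DynamoDB and S3 via boto3.

```python
# Minimal event-enrichment Lambda sketch (all resource names are hypothetical).
import json
import time
import uuid

import boto3

kinesis = boto3.client("kinesis")
dynamodb = boto3.resource("dynamodb")
s3 = boto3.client("s3")

STREAM_NAME = "enriched-events"              # hypothetical Kinesis stream
EVENTS_TABLE = dynamodb.Table("events")      # hypothetical DynamoDB table
ARCHIVE_BUCKET = "event-archive"             # hypothetical S3 bucket


def handler(event, context):
    """Enrich incoming events, route them to Kinesis, and persist to DynamoDB/S3."""
    records = event.get("Records", [])       # assumes an SQS-style trigger payload
    for record in records:
        payload = json.loads(record["body"])

        # Enrich/transform the incoming event.
        payload["event_id"] = str(uuid.uuid4())
        payload["processed_at"] = int(time.time())
        data = json.dumps(payload).encode("utf-8")

        # Route to the appropriate Kinesis stream, partitioned by event type.
        kinesis.put_record(
            StreamName=STREAM_NAME,
            Data=data,
            PartitionKey=payload.get("event_type", "unknown"),
        )

        # Persist for low-latency lookups (DynamoDB) and for the data lake (S3).
        EVENTS_TABLE.put_item(Item=payload)
        s3.put_object(
            Bucket=ARCHIVE_BUCKET,
            Key=f"raw/{payload['event_id']}.json",
            Body=data,
        )

    return {"processed": len(records)}
```

The second sketch, likewise using assumed bucket paths and column names rather than actual datasets, shows the general shape of a PySpark aggregation over partitioned Parquet data in S3, as used in the analytics-pipeline and metric-query duties.

```python
# Minimal PySpark aggregation sketch over partitioned Parquet data in S3
# (bucket, paths, and column names are hypothetical).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-event-metrics").getOrCreate()

# Read only the partitions needed; filtering on the partition column (event_date)
# prunes partitions instead of scanning the whole dataset.
events = (
    spark.read.parquet("s3://data-lake/events/")
    .filter(F.col("event_date") == "2024-01-01")
)

# Aggregate per event type: event counts and distinct users.
daily_metrics = events.groupBy("event_date", "event_type").agg(
    F.count("*").alias("event_count"),
    F.countDistinct("user_id").alias("unique_users"),
)

# Write the aggregates back to the lake, partitioned for downstream queries.
daily_metrics.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://data-lake/metrics/daily_event_metrics/"
)
```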
MINIMUM REQUIREMENTS:
Bachelor’s degree in Computer Science, Business Analytics, or a related field of study plus two (2) years of data engineering experience applying strong programming skills in Python, Java, and/or Scala, including experience writing SQL, structuring data, and applying data storage practices; experience with data modeling and data warehousing concepts; experience building data pipelines and microservices; experience with Spark, Hive, Airflow, and other streaming technologies to process large volumes of streaming data; and experience working on Amazon Web Services using EMR, Kinesis, RDS, S3, SQS, and the like.
ALTERNATE REQUIREMENTS:
Master's degree in Computer Science, Business Analytics, or a related field of study plus one (1) year of data engineering experience applying strong programming skills in Python, Java, and/or Scala, including experience writing SQL, structuring data, and applying data storage practices; experience with data modeling and data warehousing concepts; experience building data pipelines and microservices; experience with Spark, Hive, Airflow, and other streaming technologies to process large volumes of streaming data; and experience working on Amazon Web Services using EMR, Kinesis, RDS, S3, SQS, and the like.
SPECIAL REQUIREMENTS:
Must pass technical interview.
May telecommute.
OFFERED WAGE: $123,906.00 to $158,700.00 per year
#LI-DNI
Our perks & benefits
To support you at work and play, our perks and benefits include ample time off, an annual education budget, paid volunteer days, and so much more.
About Atlassian
The world’s best teams work better together with Atlassian. From medicine and space travel, to disaster response and pizza deliveries, Atlassian software products help teams all over the planet. At Atlassian, we're motivated by a common goal: to unleash the potential of every team.
We believe that the unique contributions of all Atlassians create our success. To ensure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate based on race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status. All your information will be kept confidential according to EEO guidelines.
Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.
To learn more about our culture and hiring process, explore our Candidate Resource Hub.