Designing Scalable Data Pipelines with AWS: Best Practices and Architecture
Abstract
This article examines architectural patterns for building scalable data pipelines on Amazon Web Services, covering ingestion, processing, storage, and orchestration for both batch and streaming workloads. An evaluation of core AWS services, including Kinesis, Lambda, Glue, EMR, and Redshift, yields patterns that can be applied effectively to data transformation and delivery. A financial services case study illustrates how real-time transaction processing enables personalized customer interactions, reduces operational expenditure, and improves responsiveness. Broader implications include environmental benefits from energy-efficient infrastructure, economic benefits from scaling capacity while reducing over-provisioning, and wider societal effects. Looking ahead, the convergence of AI, edge computing, and sustainability-focused technologies points the way forward for modern data systems.
Article information
Journal: Journal of Computer Science and Technology Studies
Volume (Issue): 7 (9)
Pages: 743-749
Published:
Copyright: Open access

This work is licensed under a Creative Commons Attribution 4.0 International License.