Introducing the "Data Warehousing: Optimizing Data Storage and Retrieval for Business Success" bundle!

Unlock the full potential of your data with this comprehensive collection of four essential books:

1. Data Warehousing Fundamentals: A Beginner's Guide
· Dive into the foundational principles of data warehousing and learn how to build a solid framework for storing and managing your organization's data.
· Understand the importance of data modeling and gain insights into the extraction, transformation, and loading (ETL) processes essential for efficient data management.

2. Mastering Data Modeling for Data Warehousing
· Take your data modeling skills to the next level with advanced techniques for conceptual, logical, and dimensional modeling.
· Learn how to design scalable and efficient data warehouses that meet the evolving needs of your organization.

3. Advanced ETL Techniques for Data Warehousing Optimization
· Optimize your ETL processes and streamline data extraction, transformation, and loading for maximum efficiency.
· Explore advanced techniques such as incremental loading and change data capture (CDC) to ensure the smooth operation of your data warehouse.

4. Big Data Analytics: Harnessing the Power of Data Warehousing for Experts
· Unlock the transformative potential of big data analytics and gain actionable insights to drive informed decision-making.
· Discover how to leverage your data warehouse for real-time data processing, predictive modeling, and more.

With this bundle, you'll gain the knowledge and skills needed to optimize your data storage and retrieval processes, empowering you to harness the power of data for business success. Whether you're a beginner looking to build a solid foundation or an expert seeking advanced strategies, this bundle has something for everyone. Don't miss out on this opportunity to revolutionize your approach to data warehousing and take your business to new heights!
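The incremental loading and change data capture (CDC) ideas mentioned in the third book can be sketched minimally. This is an illustrative sketch only: the high-water-mark approach is one common CDC strategy, and every name, field, and record here is invented for the example rather than taken from any specific tool.

```python
# Hypothetical sketch of incremental loading: only rows changed since the
# last run (tracked by a "high-water mark" timestamp) are extracted and
# merged into the warehouse, instead of reloading the full table.

def incremental_load(source_rows, target, last_loaded_at):
    """Upsert rows whose 'updated_at' exceeds the high-water mark."""
    new_watermark = last_loaded_at
    for row in source_rows:
        if row["updated_at"] > last_loaded_at:
            target[row["id"]] = row  # upsert keyed by primary key
            new_watermark = max(new_watermark, row["updated_at"])
    return new_watermark

source = [
    {"id": 1, "updated_at": 5,  "name": "alpha"},  # unchanged, skipped
    {"id": 2, "updated_at": 12, "name": "beta"},   # changed after watermark
    {"id": 3, "updated_at": 15, "name": "gamma"},  # new row
]
warehouse = {1: {"id": 1, "updated_at": 5, "name": "alpha"}}

watermark = incremental_load(source, warehouse, last_loaded_at=10)
print(len(warehouse), watermark)  # only ids 2 and 3 were merged
```

Real CDC implementations typically read a database change log rather than comparing timestamps, but the merge logic follows this shape.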
The relentless ascent of the cloud computing paradigm has garnered focused attention in the framework of Industry 4.0. Nowadays, cloud computing services are used by 70% of business organizations, and a further 10% plan to adopt them. As a result, an estimated 4,000 additional data centers housing 400 million servers will be needed over the next decade. In 2013, the projected energy utilization of United States data centers was 91 billion kWh of electricity, equivalent to the yearly output of 34 large (500-megawatt) coal-fired power plants, sufficient to provide electricity to all households in New York City for two years. Consequently, in the next few years, this is expected to escalate to approximately 140 billion kilowatt-hours, emitting almost 150 million metric tons of carbon annually. Specifically, Amazon expends nearly half its administration budget to power and cool its server farms. Additionally, excessive power utilization increases system temperature, and every 10°C rise tends to double the failure rate of electronic devices. Data centers are foreseen to account for 3-13% of worldwide electricity usage in 2030. Hyper-Scale Data Centers (HSDCs) account for 5% of worldwide data center power utilization, while Small and Medium-Scale Data Centers (SMSDCs) consume the remaining 95%. The U.S. hosts nearly 5.17 million servers (40%) in SMSDCs. In recent years, SMSDCs furnished with high computing utilities have tended to drive up server power utilization. Therefore, this calls for identifying monitoring and control measures to curtail power utilization and minimize the carbon footprint in SMSDCs. A cloud data center is a group of connected Physical Machines (PMs), or hosts, used by organizations for network processing, remote storage, and access to enormous volumes of data. Data centers are the backbone of cloud environments.
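The 10°C rule of thumb quoted above can be expressed as a tiny calculation. The baseline temperature below is an assumption chosen for the example, not a figure from the text.

```python
# Rule of thumb from the text: each 10 degC rise roughly doubles the
# failure rate of electronic devices. Relative rate vs. a baseline:

def relative_failure_rate(temp_c, baseline_temp_c=20.0):
    """Failure rate relative to the baseline temperature."""
    return 2.0 ** ((temp_c - baseline_temp_c) / 10.0)

for t in (20, 30, 40, 50):
    print(t, relative_failure_rate(t))
# Under this rule of thumb, a server room at 50 degC sees roughly 8x the
# failure rate of one held at the 20 degC baseline.
```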
The virtualization technique plays a significant role in data centers: it facilitates sharing resources among customers through Virtual Machines (VMs). The IaaS layer uses virtualization technology to create VMs, consolidate workloads, and facilitate the delivery of computational resources to end users. The Industry 4.0 environment encompasses the extensive growth of big data applications and pervasive Internet of Things technology. Data centers are central to the modern industrial business world; consequently, almost 80% of business organizations are planning to transform to cloud computing technology, which promises to enhance business functionality. Extensive enhancements to SMSDC infrastructure comprise a diverse set of connected devices that disseminate resources to end users.
Take your data engineering skills to the next level by learning how to utilize Scala and functional programming to create continuous and scheduled pipelines that ingest, transform, and aggregate data.

Key Features
· Transform data into a clean and trusted source of information for your organization using Scala
· Build streaming and batch-processing pipelines with step-by-step explanations
· Implement and orchestrate your pipelines by following CI/CD best practices and test-driven development (TDD)
· Purchase of the print or Kindle book includes a free PDF eBook

Book Description
Most data engineers know that performance issues in a distributed computing environment can easily lead to issues impacting the overall efficiency and effectiveness of data engineering tasks. While Python remains a popular choice for data engineering due to its ease of use, Scala shines in scenarios where the performance of distributed data processing is paramount. This book will teach you how to leverage the Scala programming language on the Spark framework and use the latest cloud technologies to build continuous and triggered data pipelines. You'll do this by setting up a data engineering environment for local development and scalable distributed cloud deployments using data engineering best practices, test-driven development, and CI/CD. You'll also get to grips with the DataFrame API, Dataset API, and Spark SQL API and their use. Data profiling and quality in Scala will also be covered, alongside techniques for orchestrating and performance tuning your end-to-end pipelines to deliver data to your end users.
By the end of this book, you will be able to build streaming and batch data pipelines using Scala while following software engineering best practices.

What you will learn
· Set up your development environment to build pipelines in Scala
· Get to grips with polymorphic functions, type parameterization, and Scala implicits
· Use Spark DataFrames, Datasets, and Spark SQL with Scala
· Read and write data to object stores
· Profile and clean your data using Deequ
· Performance tune your data pipelines using Scala

Who this book is for
This book is for data engineers who have experience in working with data and want to understand how to transform raw data into a clean, trusted, and valuable source of information for their organization using Scala and the latest cloud technologies.

Table of Contents
1. Scala Essentials for Data Engineers
2. Environment Setup
3. An Introduction to Apache Spark and Its APIs - DataFrame, Dataset, and Spark SQL
4. Working with Databases
5. Object Stores and Data Lakes
6. Understanding Data Transformation
7. Data Profiling and Data Quality
8. Test-Driven Development, Code Health, and Maintainability
9. CI/CD with GitHub
10. Data Pipeline Orchestration
11. Performance Tuning
12. Building Batch Pipelines Using Spark and Scala
13. Building Streaming Pipelines Using Spark and Scala
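The ingest-transform-aggregate shape that batch pipelines follow can be sketched in plain Python. This sketch only mirrors the stages the blurb describes; it does not use the Spark or Scala APIs, and the records, field names, and malformed-row handling are all invented for illustration.

```python
# A minimal ingest -> transform -> aggregate batch pipeline. Spark's
# DataFrame API expresses the same stages (read, filter/cast, groupBy/agg);
# this mirrors only the shape, not the real API.
from collections import defaultdict

raw = [
    {"user": "a", "amount": "10"},
    {"user": "b", "amount": "5"},
    {"user": "a", "amount": "7"},
    {"user": "b", "amount": "bad"},  # malformed record
]

def transform(rows):
    """Clean and type the raw records, dropping malformed ones."""
    for row in rows:
        try:
            yield {"user": row["user"], "amount": int(row["amount"])}
        except ValueError:
            continue  # a real pipeline might route this to a dead-letter sink

def aggregate(rows):
    """Sum amounts per user, analogous to groupBy('user').sum('amount')."""
    totals = defaultdict(int)
    for row in rows:
        totals[row["user"]] += row["amount"]
    return dict(totals)

totals = aggregate(transform(raw))
print(totals)  # {'a': 17, 'b': 5}
```

Because `transform` is a generator, records stream through one at a time, which loosely parallels how distributed engines avoid materializing intermediate results.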
Genealogies document relationships between persons involved in historical events. Information about the events is parsed from communications from the past. This book explores a way to organize information from multiple communications into a trustworthy representation of a genealogical history of the modern world. The approach defines metrics for evaluating the consistency, correctness, closure, connectivity, completeness, and coherence of a genealogy. The metrics are evaluated using a 312,000-person research genealogy that explores the common ancestors of the royal families of Europe. A major result is that completeness is defined by a genealogy symmetry property driven by two exponential processes, the doubling of the number of potential ancestors each generation, and the rapid growth of lineage coalescence when the number of potential ancestors exceeds the available population. A genealogy expands from an initial root person to a large number of lineages, which then coalesce into a small number of progenitors. Using the research genealogy, candidate progenitors for persons of Western European descent are identified. A unifying ancestry is defined to which historically notable persons can be linked.
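The two exponential processes described above can be checked with a toy calculation: potential ancestors double each generation until they exceed the available population, at which point lineages must coalesce. The 5-million population figure below is illustrative, not a number from the book.

```python
# Potential ancestors at generation n is 2**n (two parents per person).
# Coalescence becomes unavoidable once 2**n exceeds the population.

def potential_ancestors(generation):
    return 2 ** generation

population = 5_000_000  # illustrative historical-population figure

gen = 0
while potential_ancestors(gen) <= population:
    gen += 1
print(gen, potential_ancestors(gen))
# 2**23 = 8,388,608 exceeds a 5-million population, so within roughly
# 23 generations the lineages must coalesce onto shared ancestors.
```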
The European Summer School in Logic, Language and Information (ESSLLI) is organized every year by the Association for Logic, Language and Information (FoLLI) at different sites around Europe. The papers cover vastly different topics, but each falls in the intersection of the three primary topics of ESSLLI: Logic, Language and Computation. The 13 papers presented in this volume have been selected from among 81 papers submitted over the years 2019, 2020 and 2021. The ESSLLI Student Session is an excellent venue for students to present their work and receive valuable feedback from renowned experts in their respective fields. The Student Session accepts submissions for three different tracks: Language and Computation (LaCo), Logic and Computation (LoCo), and Logic and Language (LoLa).
This book focuses on the widespread use of deep neural networks and their various techniques in session-based recommender systems (SBRS). It presents the success of using deep learning techniques in many SBRS applications from different perspectives. For this purpose, the concepts and fundamentals of SBRS are fully elaborated, and different deep learning techniques focusing on the development of SBRS are studied. The book is well-modularized, and each chapter can be read in a stand-alone manner based on individual interests and needs. In the first chapter of the book, definitions and concepts related to SBRS are reviewed, and a taxonomy of different SBRS approaches is presented, where the characteristics and applications of each class are discussed separately. The second chapter starts with the basic concepts of deep learning and the characteristics of each model. Then, each deep learning model, along with its architecture and mathematical foundations, is introduced. Next, Chapter 3 analyses different approaches of deep discriminative models in session-based recommender systems. In the fourth chapter, session-based recommender systems that benefit from deep generative neural networks are discussed. Subsequently, Chapter 5 discusses session-based recommender systems using advanced/hybrid deep learning models. Eventually, Chapter 6 reviews different learning-to-rank methods focusing on information retrieval and recommender system domains. Finally, the results of the investigations and findings from the research review conducted throughout the book are presented in a conclusive summary. This book aims at researchers who intend to use deep learning models to solve the challenges related to SBRS. The target audience includes researchers entering the field, graduate students specializing in recommender systems, web data mining, information retrieval, or machine/deep learning, and advanced industry developers working on recommender systems.
This book constitutes the proceedings of the First International Conference, CINS 2023, held in Dubai, United Arab Emirates, from October 18 to 20, 2023. The 11 full papers included in this volume were carefully reviewed and selected from 130 submissions. This volume discusses contemporary challenges within computing systems and the utilization of intelligent approaches to improve computing methodologies, data processing capabilities, and the application of these intelligent techniques. The book also addresses several topics pertaining to networks, including security, network data processing, networks that transcend boundaries, device heterogeneity, and advancements in networks connected to the Internet of Things, software-defined networks, cloud computing, and intelligent networks.
This book explores provenance, the study and documentation of how things come to be. Traditionally defined as the origins, source, or ownership of an artifact, provenance today is not limited to historical domains. It can be used to describe what did happen (retrospective provenance), what could happen (subjunctive provenance), or what will happen (prospective provenance). Provenance information is ubiquitous and abundant; for example, a wine label that details the winery, type of grape, and country of origin tells a provenance story that determines the value of the bottle. This book presents select standards used in organizing provenance information and provides concrete examples on how to implement them. Provenance transcends disciplines, and this book is intended for anyone who is interested in documenting workflows and recipes. The goal is to empower readers to frame and answer provenance questions for their own work. Provenance is increasingly important in computational workflows and e-science, and this book addresses the need for a practical introduction to provenance documentation with simple-to-use, multi-disciplinary examples and activities. Case studies and examples address the creation of basic records using a variety of provenance metadata models, and the differences between PROV, ProvONE, and PREMIS are discussed. Readers will gain an understanding of the uses of provenance metadata in different domains and sectors in order to make informed decisions on their use. Documenting provenance can be a daunting challenge; with clear examples and explanations, the task of exploring provenance needs becomes far less intimidating.
Leverage BigQuery to understand and prepare your data to ensure that it's accurate, reliable, and ready for analysis and modeling.

Key Features
· Use mock datasets to explore data with the BigQuery web UI, bq CLI, and BigQuery API in the Cloud console
· Master optimization techniques for storage and query performance in BigQuery
· Engage with case studies on data exploration and preparation for advertising, transportation, and customer support data
· Purchase of the print or Kindle book includes a free PDF eBook

Book Description
Data professionals encounter a multitude of challenges such as handling large volumes of data, dealing with data silos, and the lack of appropriate tools. Datasets often arrive in different conditions and formats, demanding considerable time from analysts, engineers, and scientists to process and uncover insights. The complexity of the data life cycle often hinders teams and organizations from extracting the desired value from their data assets. Data Exploration and Preparation with BigQuery offers a holistic solution to these challenges.

The book begins with the basics of BigQuery while covering the fundamentals of data exploration and preparation. It then progresses to demonstrate how to use BigQuery for these tasks and explores the array of big data tools at your disposal within the Google Cloud ecosystem. The book doesn't merely offer theoretical insights; it's a hands-on companion that walks you through properly structuring your tables for query efficiency and ensures adherence to data preparation best practices. You'll also learn when to use Dataflow, BigQuery, and Dataprep for ETL and ELT workflows.
The book will skillfully guide you through various case studies, demonstrating how BigQuery can be used to solve real-world data problems. By the end of this book, you'll have mastered the use of SQL to explore and prepare datasets in BigQuery, unlocking deeper insights from data.

What you will learn
· Assess the quality of a dataset and learn best practices for data cleansing
· Prepare data for analysis, visualization, and machine learning
· Explore approaches to data visualization in BigQuery
· Apply acquired knowledge to real-life scenarios and design patterns
· Set up and organize BigQuery resources
· Use SQL and other tools to navigate datasets
· Implement best practices to query BigQuery datasets
· Gain proficiency in using data preparation tools, techniques, and strategies

Who this book is for
This book is for data analysts seeking to enhance their data exploration and preparation skills using BigQuery. It guides anyone using BigQuery as a data warehouse to extract business insights from large datasets. A basic understanding of SQL, reporting, data modeling, and transformations will assist with understanding the topics covered in this book.

Table of Contents
1. Introducing BigQuery and Its Components
2. BigQuery Organization and Design
3. Exploring Data in BigQuery
4. Loading and Transforming Data
5. Querying BigQuery Data
6. Exploring Data with Notebooks
7. Further Exploring and Visualizing Data
8. An Overview of Data Preparation Tools
9. Cleansing and Transforming Data
10. Best Practices for Data Preparation, Optimization, and Cost Control
11. Hands-On Exercise - Analyzing Advertising Data
12. Hands-On Exercise - Analyzing Transportation Data
13. Hands-On Exercise - Analyzing Customer Support Data
14. Summary and Future Directions
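The SQL-based cleansing the book describes can be sketched with Python's built-in sqlite3 module standing in for BigQuery. The deduplicate-and-normalize pattern is the same idea in either engine, but the table, columns, and data below are invented for the example, and sqlite's SQL dialect differs from BigQuery's.

```python
# Illustrative data cleansing in SQL: deduplicate rows and normalize
# casing/whitespace in one pass. Uses sqlite3 as a stand-in engine.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_events (user_id TEXT, email TEXT);
    INSERT INTO raw_events VALUES
        ('u1', ' Alice@Example.com '),
        ('u1', ' Alice@Example.com '),
        ('u2', 'bob@example.com');
""")

rows = conn.execute("""
    SELECT DISTINCT user_id, LOWER(TRIM(email)) AS email
    FROM raw_events
    ORDER BY user_id
""").fetchall()
print(rows)  # [('u1', 'alice@example.com'), ('u2', 'bob@example.com')]
```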
This book presents a comprehensive overview of Natural Language Interfaces to Databases (NLIDBs), an indispensable tool in the ever-expanding realm of data-driven exploration and decision making. After first demonstrating the importance of the field using an interactive ChatGPT session, the book explores the remarkable progress and general challenges faced with real-world deployment of NLIDBs. It goes on to provide readers with a holistic understanding of the intricate anatomy, essential components, and mechanisms underlying NLIDBs and how to build them. Key concepts in representing, querying, and processing structured data as well as approaches for optimizing user queries are established for the reader before their application in NLIDBs is explored. The book discusses text to data through early relevant work on semantic parsing and meaning representation before turning to cutting-edge advancements in how NLIDBs are empowered to comprehend and interpret human languages. Various evaluation methodologies, metrics, datasets and benchmarks that play a pivotal role in assessing the effectiveness of mapping natural language queries to formal queries in a database and the overall performance of a system are explored. The book then covers data to text, where formal representations of structured data are transformed into coherent and contextually relevant human-readable narratives. It closes with an exploration of the challenges and opportunities related to interactivity and its corresponding techniques for each dimension, such as instances of conversational NLIDBs and multi-modal NLIDBs where user input is beyond natural language. This book provides a balanced mixture of theoretical insights, practical knowledge, and real-world applications that will be an invaluable resource for researchers, practitioners, and students eager to explore the fundamental concepts of NLIDBs.
Take your machine learning skills to the next level by mastering Databricks and building robust ML pipeline solutions for future ML innovations.

Key Features
· Learn to build robust ML pipeline solutions for the transition to Databricks
· Master commonly available features like AutoML and MLflow
· Leverage data governance and model deployment using the MLflow Model Registry
· Purchase of the print or Kindle book includes a free PDF eBook

Book Description
Unleash the potential of Databricks for end-to-end machine learning with this comprehensive guide, tailored for experienced data scientists and developers transitioning from DIY or other cloud platforms. Building on a strong foundation in Python, Practical Machine Learning on Databricks serves as your roadmap from development to production, covering all intermediary steps using the Databricks platform. You'll start with an overview of machine learning applications, Databricks platform features, and MLflow. Next, you'll dive into data preparation, model selection, and training essentials and discover the power of the Databricks Feature Store for precomputing feature tables. You'll also learn to kickstart your projects using Databricks AutoML and automate retraining and deployment through Databricks Workflows. By the end of this book, you'll have mastered MLflow for experiment tracking, collaboration, and advanced use cases like model interpretability and governance. The book is enriched with hands-on example code at every step.
While primarily focused on generally available features, the book equips you to easily adapt to future innovations in machine learning, Databricks, and MLflow.

What you will learn
· Transition smoothly from DIY setups to Databricks
· Master AutoML for quick ML experiment setup
· Automate model retraining and deployment
· Leverage the Databricks Feature Store for data prep
· Use MLflow for effective experiment tracking
· Gain practical insights for scalable ML solutions
· Find out how to handle model drift in production environments

Who this book is for
This book is for experienced data scientists, engineers, and developers proficient in Python, statistics, and the ML lifecycle who are looking to transition to Databricks from DIY clouds. Introductory Spark knowledge is a must to make the most out of this book; however, end-to-end ML workflows will be covered. If you aim to accelerate your machine learning workflows and deploy scalable, robust solutions, this book is an indispensable resource.

Table of Contents
1. ML Process and Challenges
2. Overview of ML on Databricks
3. Utilizing the Feature Store
4. Understanding MLflow Components
5. Create a Baseline Model for Bank Customer Churn Prediction Using AutoML
6. Model Versioning and Webhooks
7. Model Deployment Approaches
8. Automating ML Workflows Using Databricks Jobs
9. Model Drift Detection for Our Churn Prediction Model and Retraining
10. CI/CD to Automate Model Retraining and Redeployment
This book constitutes the refereed proceedings of the 5th International Conference on Science of Cyber Security, SciSec 2023, held in Melbourne, VIC, Australia, during July 11-14, 2023. The 21 full papers presented together with 6 short papers were carefully reviewed and selected from 60 submissions. The papers are organized in topical sections, including: ACDroid: Detecting Collusion Applications on Smart Devices; Almost Injective and Invertible Encodings for Jacobi Quartic Curves; and Decompilation Based Deep Binary-Source Function Matching.
This book sheds light on state-of-the-art theories for more challenging outfit compatibility modeling scenarios. In particular, this book presents several cutting-edge graph learning techniques that can be used for outfit compatibility modeling. Due to its remarkable economic value, fashion compatibility modeling has gained increasing research attention in recent years. Although great efforts have been dedicated to this research area, previous studies mainly focused on fashion compatibility modeling for outfits that only involved two items and overlooked the fact that each outfit may be composed of a variable number of items. This book develops a series of graph-learning based outfit compatibility modeling schemes, all of which have been proven to be effective over several public real-world datasets. This systematic approach benefits readers by introducing the techniques for compatibility modeling of outfits that involve a variable number of composing items. To deal with the challenging task of outfit compatibility modeling, this book provides comprehensive solutions, including correlation-oriented graph learning, modality-oriented graph learning, unsupervised disentangled graph learning, partially supervised disentangled graph learning, and metapath-guided heterogeneous graph learning. Moreover, this book sheds light on research frontiers that can inspire future research directions for scientists and researchers.
This book provides a coherent and complete overview of various Question Answering (QA) systems. It covers three main categories based on the source of the data, which can be unstructured text (TextQA), structured knowledge graphs (KBQA), or the combination of both. Developing a QA system usually requires using a combination of various important techniques, including natural language processing, information retrieval and extraction, knowledge graph processing, and machine learning. After a general introduction and an overview of the book in Chapter 1, the history of QA systems and the architecture of different QA approaches are explained in Chapter 2. It starts with early closed-domain QA systems and reviews different generations of QA up to state-of-the-art hybrid models. Next, Chapter 3 is devoted to explaining the datasets and the metrics used for evaluating TextQA and KBQA. Chapter 4 introduces the neural and deep learning models used in QA systems. This chapter includes the required knowledge of deep learning and neural text representation models for comprehending the QA models over text and QA models over knowledge bases explained in Chapters 5 and 6, respectively. In some KBQA models, textual data is also used as another source besides the knowledge base; these hybrid models are studied in Chapter 7. In Chapter 8, a detailed explanation of some well-known real applications of QA systems is provided. Eventually, open issues and future work on QA are discussed in Chapter 9. This book delivers a comprehensive overview of QA over text, QA over knowledge bases, and hybrid QA systems which can be used by researchers starting in this field. It will help its readers to follow the state-of-the-art research in the area by providing essential and basic knowledge.
This book provides a new model to explore discoverability and enhance the meaning of information. The authors have coined the term epidata, which includes items and circumstances that impact the expression of the data in a document but are not part of the ordinary process of retrieval systems. Epidata affords pathways and points to details that cast light on proximities that might otherwise go unknown. In addition, epidata are clues to mis- and dis-information discernment. There are many ways to find needed information; however, finding the most useable information is not an easy task. The book explores the uses of proximity and the concept of epidata, which increases the probability of finding functional information. The authors sketch a constellation of proximities, present examples of attempts to accomplish proximity, and provoke a discussion of the role of proximity in the field. In addition, the authors suggest that proximity is a thread between retrieval constructs based on known topics, predictable relations, and types of information seeking that lie outside such constructs: browsing, stumbling, encountering, detective work, art making, and translation.
This book constitutes the refereed proceedings of the 16th International Conference on Similarity Search and Applications, SISAP 2023, held in A Coruña, Spain, during October 9-11, 2023. The 16 full papers and 4 short papers included in this book were carefully reviewed and selected from 33 submissions. They were organized in topical sections as follows: similarity queries, similarity measures, indexing and retrieval, data management, feature extraction, intrinsic dimensionality, efficient algorithms, similarity in machine learning and data mining.
This book constitutes the post-conference proceedings of the satellite events held at the 20th Extended Semantic Web Conference, ESWC 2023, held in Hersonissos, Greece, during May 28-June 1, 2023. The 50 full papers included in this book were carefully reviewed and selected from 109 submissions. They were organized in sections as follows: Posters and Demos, Industry, and PhD Symposium.
"Insights from the Algorithm: A Machine Learning Story" is an engaging and informative journey into the fascinating world of machine learning and artificial intelligence. This narrative weaves together a rich tapestry of concepts, techniques, and practical applications, offering a comprehensive understanding of this rapidly evolving field.

This compelling narrative takes readers on a captivating exploration of the inner workings of algorithms, data analytics, and predictive modeling. It delves deep into the ever-expanding universe of machine learning, offering insights into its foundational principles, methodologies, and real-world applications.

Throughout the pages of this book, you'll embark on a quest to unravel the secrets behind the algorithms that power recommendation systems, autonomous vehicles, and natural language processing. You'll discover how machine learning algorithms are designed to detect intricate patterns within data, enabling them to make predictions and take intelligent actions.

With a keen focus on demystifying complex technical concepts, "Insights from the Algorithm" serves as a beacon for both beginners and seasoned data scientists. It elucidates key topics such as supervised and unsupervised learning, deep learning, neural networks, and the ethics of AI in an accessible and engaging manner.

The narrative goes beyond mere technicalities and offers a thought-provoking exploration of the societal and ethical implications of machine learning. It discusses the responsible use of AI and the impact of algorithms on decision-making, privacy, and bias, providing a comprehensive understanding of the challenges and opportunities presented by this revolutionary technology.

In "Insights from the Algorithm," you'll find a treasure trove of case studies and real-world examples that illustrate the transformative power of machine learning.
From healthcare and finance to marketing and autonomous robotics, this narrative highlights how machine learning is reshaping industries and offering innovative solutions to complex problems.

Join us on this captivating journey as we unveil the intricate world of machine learning and discover how algorithms are not just lines of code but powerful tools that unlock new dimensions of understanding and enable intelligent actions. This book is your gateway to the future, where data-driven insights and algorithmic intelligence redefine what's possible. Whether you're an enthusiast, a student, or a professional in the field, this narrative offers a comprehensive, accessible, and inspiring guide to the incredible world of machine learning.
This book provides a principled data-driven framework that progressively constructs, enriches, and applies taxonomies without leveraging massive human-annotated data. Traditionally, people construct domain-specific taxonomies through extensive manual curation, which is time-consuming and costly. In today's information era, people are inundated with vast amounts of text data. Despite their usefulness, people haven't yet exploited the full power of taxonomies due to the heavy curation needed for creating and maintaining them. To bridge this gap, the authors discuss automated taxonomy discovery and exploration, with an emphasis on label-efficient machine learning methods and their real-world usages. A taxonomy organizes entities and concepts in a hierarchical way. It is ubiquitous in our daily life, ranging from product taxonomies used by online retailers and topic taxonomies deployed by news outlets and social media, to scientific taxonomies deployed by digital libraries across various domains. When properly analyzed, these taxonomies can play a vital role for science, engineering, business intelligence, policy design, e-commerce, and more. Intuitive examples are used throughout, enabling readers to grasp concepts more easily.
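The hierarchical organization described above can be made concrete with a toy product taxonomy. The categories and the nested-dict representation here are invented for illustration; real systems typically store taxonomies in graphs or databases.

```python
# A toy taxonomy as nested dicts: each key is a category, each value the
# dict of its subcategories (empty dict = leaf). The traversal lists every
# root-to-leaf path, i.e. the full placement of each leaf category.

taxonomy = {
    "Electronics": {
        "Phones": {},
        "Laptops": {"Gaming": {}, "Ultrabook": {}},
    },
    "Books": {"Fiction": {}, "Nonfiction": {}},
}

def leaf_paths(node, prefix=()):
    """Yield each path from the root down to a leaf category."""
    if not node:
        yield prefix
        return
    for name, child in node.items():
        yield from leaf_paths(child, prefix + (name,))

paths = list(leaf_paths(taxonomy))
for path in paths:
    print(" > ".join(path))
```

Automated taxonomy discovery, as the book describes, amounts to learning a structure like this from text rather than hand-writing it.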
This book constitutes the refereed proceedings of the 22nd International TRIZ Future Conference on Automated Invention for Smart Industries, TFC 2022, which took place in Warsaw, Poland, in September 2022; the event was sponsored by IFIP WG 5.4. The 39 full papers presented were carefully reviewed and selected from 43 submissions. They are organized in the following thematic sections: New perspectives of TRIZ; AI in systematic innovation; systematic innovations supporting IT and AI; TRIZ applications; TRIZ education and ecosystem.
This book constitutes the refereed proceedings of the 14th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management, IC3K 2022, held in Valletta, Malta, during October 24–26, 2022. The 14 full papers included in this book were carefully reviewed and selected from 127 submissions. They are organized in topical sections as follows: Knowledge Discovery and Information Retrieval; Knowledge Engineering and Ontology Development; and Knowledge Management and Information Systems.
Supercharge and deploy Amazon Redshift Serverless, train and deploy machine learning models using Amazon Redshift ML, and run inference queries at scale.

Key Features:
· Leverage supervised learning to build binary classification, multi-class classification, and regression models
· Learn to use unsupervised learning with the K-means clustering method
· Master the art of time series forecasting using Redshift ML
· Purchase of the print or Kindle book includes a free PDF eBook

Book Description:
Amazon Redshift Serverless enables organizations to run petabyte-scale cloud data warehouses quickly and cost-effectively, allowing data science professionals to deploy cloud data warehouses efficiently and use easy-to-use tools to train models and run predictions. This practical guide will help developers and data professionals working with Amazon Redshift data warehouses to put their SQL knowledge to work for training and deploying machine learning models. The book begins by helping you explore the inner workings of Redshift Serverless as well as the foundations of data analytics and the types of machine learning. With step-by-step explanations of essential concepts and practical examples, you'll then learn to build your own classification and regression models. As you advance, you'll find out how to deploy various types of machine learning projects using familiar SQL code, before delving deeper into Redshift ML.
In the concluding chapters, you'll discover best practices for implementing serverless architecture with Redshift. By the end of this book, you'll be able to configure and deploy Amazon Redshift Serverless, train and deploy machine learning models using Amazon Redshift ML, and run inference queries at scale.

What You Will Learn:
· Utilize Redshift Serverless for data ingestion, data analysis, and machine learning
· Create supervised and unsupervised models, and learn how to supply your own custom parameters
· Discover how to use time series forecasting in your data warehouse
· Create a SageMaker endpoint and use it to build a Redshift ML model for remote inference
· Find out how to operationalize machine learning in your data warehouse
· Use model explainability and calculate probabilities with Amazon Redshift ML

Who This Book Is For:
Data scientists and machine learning developers working with Amazon Redshift who want to explore its machine learning capabilities will find this definitive guide helpful. A basic understanding of machine learning techniques and working knowledge of Amazon Redshift are needed to make the most of this book.
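To give a flavor of the SQL-based workflow the book describes, Redshift ML lets you train a model directly from a query with CREATE MODEL and then call the generated function for inference. This is only a minimal sketch: the table, column, function, and S3 bucket names below are hypothetical placeholders, and a real cluster and IAM role are assumed.

```sql
-- Train a binary classification model from query results
-- (customer_history, its columns, and the bucket name are hypothetical).
CREATE MODEL customer_churn
FROM (SELECT age, tenure, monthly_charges, churned
      FROM customer_history)
TARGET churned
FUNCTION predict_customer_churn
IAM_ROLE default
SETTINGS (S3_BUCKET 'my-redshift-ml-artifacts');

-- Once training completes, run inference at scale with the generated
-- SQL function:
SELECT customer_id,
       predict_customer_churn(age, tenure, monthly_charges) AS will_churn
FROM customer_history;
```

Under the hood, Redshift ML hands the training data off to Amazon SageMaker Autopilot and imports the resulting model back as a SQL function, which is why no Python or external tooling appears in the workflow.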
This book constitutes the proceedings of the 23rd International TRIZ Future Conference on Towards AI-Aided Invention and Innovation, TFC 2023, which was held in Offenburg, Germany, during September 12–14, 2023. The event was sponsored by IFIP WG 5.4. The 43 full papers presented in this book were carefully reviewed and selected from 80 submissions. The papers are divided into the following topical sections: AI and TRIZ; sustainable development; general vision of TRIZ; TRIZ impact in society; and TRIZ case studies.