A data pipeline is a series of steps that takes large data sets and creates usable results from them. At the beginning of a data pipeline, a data set might be pulled from a database, a distributed file system, or a Kafka topic. Throughout a data pipeline, different data sets are joined, filtered, and statistically aggregated.
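The stages described above can be sketched as a minimal in-memory pipeline. The data, field names, and threshold below are all hypothetical, chosen only to illustrate the join, filter, and aggregate steps:

```python
# Illustrative pipeline: ingest -> join -> filter -> aggregate.
# All records and names here are made up for the example.

orders = [
    {"user_id": 1, "amount": 30.0},
    {"user_id": 1, "amount": 70.0},
    {"user_id": 2, "amount": 15.0},
]
users = {1: "alice", 2: "bob"}

# Join: attach the user's name to each order record.
joined = [{**order, "name": users[order["user_id"]]} for order in orders]

# Filter: keep only orders at or above an (arbitrary) threshold.
filtered = [order for order in joined if order["amount"] >= 20.0]

# Aggregate: total order amount per user.
totals = {}
for order in filtered:
    totals[order["name"]] = totals.get(order["name"], 0.0) + order["amount"]

print(totals)  # {'alice': 100.0}
```

In a real pipeline each stage is a place where bad data can slip through silently, which is the problem tools like Great Expectations address by validating data between stages.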
The post Great Expectations: Data Pipeline Testing with Abe Gong appeared first on Software Engineering Daily.
📆 2020-02-14 11:00 / ⌛ 00:57:54