ClickHouse Source Code Notes 2: How the Aggregation Flow Is Implemented

Source: https://www.cnblogs.com/happenlee/archive/2020/07/17/13328977.html


The previous note covered the implementation of aggregate functions and showed how they are registered with ClickHouse and then invoked. This note picks up where that one left off and dissects the overall implementation of ClickHouse's aggregation flow.
In this second article, let's look at how the aggregation flow is implemented. All aboard!

1. Reviewing the Basics

ClickHouse's core implementation interfaces
  • The Block class
    As discussed last time, ClickHouse is a column-oriented database that represents in-memory data through the IColumn interface. A Block is a collection of such columns: one Block holds a set of columns, and countless Blocks together make up what we normally think of as a table.
    During query processing in ClickHouse, the smallest unit of data processed is the Block. As the code below shows, a Block is simply a set of columns plus a map from column name to the column's offset.
class Block
{
private:
    using Container = ColumnsWithTypeAndName;
    using IndexByName = std::map<String, size_t>;

    Container data;
    IndexByName index_by_name;

    /// ... (the rest of the class is elided)
};

This is a very important class, yet its implementation is not complicated. Block sits at the core of ClickHouse, and all of the work that follows is built on top of it.
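
To see what that means in practice, here is a minimal sketch written for this note rather than taken from the ClickHouse repository. It assembles a one-column Block; DataTypeUInt64, ColumnWithTypeAndName, and Block::insert are real ClickHouse APIs, while the helper function itself is purely illustrative.

#include <Core/Block.h>
#include <DataTypes/DataTypesNumber.h>

using namespace DB;

Block makeSampleBlock()
{
    auto type = std::make_shared<DataTypeUInt64>();
    auto column = type->createColumn();
    for (UInt64 i = 0; i < 3; ++i)
        column->insert(i);

    Block block;
    /// insert() appends to `data` and records the name -> position mapping
    /// in `index_by_name`, so by-name and positional access stay in sync.
    block.insert(ColumnWithTypeAndName(std::move(column), type, "x"));
    return block;
}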

  • The abstract class IBlockInputStream
    As the name suggests, IBlockInputStream is an interface to be implemented.
    It is another crucial interface: ClickHouse's entire invocation model is built on top of IBlockInputStream. Its most essential method is read(), which returns a Block that has been processed by the corresponding stream.
    By now the picture should be clear: ClickHouse implements the volcano model through IBlockInputStream. Each stream handles a different piece of the query logic, the streams are layered and iterated over one another, and the output of the final stream is exactly the result the user asked for. (A pull-loop sketch follows the class definition below.)
    IBlockInputStream also has a twin, IBlockOutputStream, which, as the name implies, comes into play whenever writes are needed.
class IBlockInputStream : public TypePromotion<IBlockInputStream>
{
    friend struct BlockStreamProfileInfo;

public:
    IBlockInputStream() { info.parent = this; }
    virtual ~IBlockInputStream() {}

    IBlockInputStream(const IBlockInputStream &) = delete;
    IBlockInputStream & operator=(const IBlockInputStream &) = delete;

    /// To output the data stream transformation tree (query execution plan).
    virtual String getName() const = 0;

    /** Get data structure of the stream in a form of "header" block (it is also called "sample block").
      * Header block contains column names, data types, columns of size 0. Constant columns must have corresponding values.
      * It is guaranteed that method "read" returns blocks of exactly that structure.
      */
    virtual Block getHeader() const = 0;

    virtual const BlockMissingValues & getMissingValues() const
    {
        static const BlockMissingValues none;
        return none;
    }

    /// If this stream generates data in order by some keys, return true.
    virtual bool isSortedOutput() const { return false; }

    /// In case of isSortedOutput, return corresponding SortDescription
    virtual const SortDescription & getSortDescription() const;

    /** Read next block.
      * If there are no more blocks, return an empty block (for which operator `bool` returns false).
      * NOTE: Only one thread can read from one instance of IBlockInputStream simultaneously.
      * This also applies for readPrefix, readSuffix.
      */
    Block read();

    /// ... (the rest of the interface is elided)
};
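
To make the volcano-style pull model concrete, here is a minimal hand-written sketch (not from the ClickHouse repository; readPrefix, read, and readSuffix are the real IBlockInputStream methods, while drain and the consume callback are made up for illustration):

#include <DataStreams/IBlockInputStream.h>
#include <functional>

using namespace DB;

/// Pull blocks out of the root of a stream tree until it is exhausted.
void drain(IBlockInputStream & root, const std::function<void(const Block &)> & consume)
{
    root.readPrefix();
    /// read() returns one processed Block at a time; an empty Block
    /// (whose operator bool is false) marks the end of the stream.
    while (Block block = root.read())
        consume(block);
    root.readSuffix();
}
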
  • The AggregatingBlockInputStream class
    At last, our protagonist: AggregatingBlockInputStream, a subclass of the IBlockInputStream above and the class we will analyze in depth today.
class AggregatingBlockInputStream : public IBlockInputStream
{
public:
    /** keys are taken from the GROUP BY part of the query
      * Aggregate functions are searched everywhere in the expression.
      * Columns corresponding to keys and arguments of aggregate functions must already be computed.
      */
    AggregatingBlockInputStream(const BlockInputStreamPtr & input, const Aggregator::Params & params_, bool final_)
        : params(params_), aggregator(params), final(final_)
    {
        children.push_back(input);
    }

    String getName() const override { return "Aggregating"; }

    Block getHeader() const override;

protected:
    Block readImpl() override;

    Aggregator::Params params;
    Aggregator aggregator;
    bool final;

    bool executed = false;

    std::vector<std::unique_ptr<TemporaryFileStream>> temporary_inputs;

     /** From here we will get the completed blocks after the aggregation. */
    std::unique_ptr<IBlockInputStream> impl;
};

Start with its constructor, whose parameters are:

  • BlockInputStreamPtr: easy to understand; this is the child stream, the one that actually produces the data. The aggregation that follows operates on the results the child returns.
  • params: the aggregation parameters, and they matter a great deal. They record which columns are grouping keys, which aggregate functions are invoked, and other core information. The aggregator, the class that actually performs the aggregation, is also constructed from this parameter, which is an inner class of Aggregator.
  • final: indicates whether this stream yields the final result or whether further computation follows.

The crux here is that AggregatingBlockInputStream overrides the readImpl() interface to implement the concrete logic. AggregatingBlockInputStream also has a twin, ParallelAggregatingBlockInputStream, which speeds up the aggregation flow further through parallelism. (In the author's tests, parallelism nearly doubled throughput on simple aggregation queries.)
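
For orientation, the shape of readImpl can be paraphrased roughly as follows. This is a simplified sketch condensed for this note, not the verbatim source; in particular the many_data local and the max_threads value are illustrative. On the first call it drains the child stream through the Aggregator; every later call is served from the merged result stream impl.

Block AggregatingBlockInputStream::readImpl()
{
    if (!executed)
    {
        executed = true;
        AggregatedDataVariantsPtr data_variants = std::make_shared<AggregatedDataVariants>();

        /// Pull every block from the child stream and aggregate it.
        aggregator.execute(children.back(), *data_variants);

        /// Expose the aggregation states as a block stream for the caller.
        ManyAggregatedDataVariants many_data { data_variants };
        impl = aggregator.mergeAndConvertToBlocks(many_data, final, 1 /* max_threads */);
    }

    if (isCancelledOrThrowIfKilled() || !impl)
        return {};

    return impl->read();
}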

  • The Aggregator::Params class
    Aggregator::Params is an inner class of Aggregator, and it is the most important class in the whole aggregation process. After query parsing and optimization, an execution plan for the aggregation is produced, and all of that plan's parameters, such as which columns to aggregate and which aggregate operators to apply, are carried in an Aggregator::Params instance and handed to the corresponding Aggregator, which implements the aggregation logic. (A hypothetical construction example follows the struct definition below.)
    struct Params
    {
        /// Data structure of source blocks.
        Block src_header;
        /// Data structure of intermediate blocks before merge.
        Block intermediate_header;

        /// What to count.
        const ColumnNumbers keys;
        const AggregateDescriptions aggregates;
        const size_t keys_size;
        const size_t aggregates_size;

        /// The settings of approximate calculation of GROUP BY.
        const bool overflow_row;    /// Do we need to put into AggregatedDataVariants::without_key aggregates for keys that are not in max_rows_to_group_by.
        const size_t max_rows_to_group_by;
        const OverflowMode group_by_overflow_mode;

        /// Two-level aggregation thresholds (0 - do not use two-level aggregation).
        /// These two fields were elided in the original excerpt; the constructor below initializes them.
        const size_t group_by_two_level_threshold;
        const size_t group_by_two_level_threshold_bytes;
        /// Settings to flush temporary data to the filesystem (external aggregation).
        const size_t max_bytes_before_external_group_by;        /// 0 - do not use external aggregation.

        /// Return empty result when aggregating without keys on empty set.
        bool empty_result_for_aggregation_by_empty_set;

        VolumePtr tmp_volume;

        /// Settings is used to determine cache size. No threads are created.
        size_t max_threads;

        const size_t min_free_disk_space;
        Params(
            const Block & src_header_,
            const ColumnNumbers & keys_, const AggregateDescriptions & aggregates_,
            bool overflow_row_, size_t max_rows_to_group_by_, OverflowMode group_by_overflow_mode_,
            size_t group_by_two_level_threshold_, size_t group_by_two_level_threshold_bytes_,
            size_t max_bytes_before_external_group_by_,
            bool empty_result_for_aggregation_by_empty_set_,
            VolumePtr tmp_volume_, size_t max_threads_,
            size_t min_free_disk_space_)
            : src_header(src_header_),
            keys(keys_), aggregates(aggregates_), keys_size(keys.size()), aggregates_size(aggregates.size()),
            overflow_row(overflow_row_), max_rows_to_group_by(max_rows_to_group_by_), group_by_overflow_mode(group_by_overflow_mode_),
            group_by_two_level_threshold(group_by_two_level_threshold_), group_by_two_level_threshold_bytes(group_by_two_level_threshold_bytes_),
            max_bytes_before_external_group_by(max_bytes_before_external_group_by_),
            empty_result_for_aggregation_by_empty_set(empty_result_for_aggregation_by_empty_set_),
            tmp_volume(tmp_volume_), max_threads(max_threads_),
            min_free_disk_space(min_free_disk_space_)
        {
        }

        /// Only parameters that matter during merge.
        Params(const Block & intermediate_header_,
            const ColumnNumbers & keys_, const AggregateDescriptions & aggregates_, bool overflow_row_, size_t max_threads_)
            : Params(Block(), keys_, aggregates_, overflow_row_, 0, OverflowMode::THROW, 0, 0, 0, false, nullptr, max_threads_, 0)
        {
            intermediate_header = intermediate_header_;
        }
    };
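
As a purely hypothetical illustration of how these fields get filled, consider a query like SELECT k, sum(v) FROM t GROUP BY k. The real construction happens inside ClickHouse's interpreter, so the helper below, its header argument, the threshold values, and the factory call are all assumptions made for this sketch:

#include <Interpreters/Aggregator.h>
#include <AggregateFunctions/AggregateFunctionFactory.h>

using namespace DB;

Aggregator::Params makeParamsForSumByK(const Block & header)
{
    ColumnNumbers keys { header.getPositionByName("k") };

    AggregateDescription sum_v;                          /// describes "sum(v)"
    sum_v.arguments = { header.getPositionByName("v") };
    sum_v.function = AggregateFunctionFactory::instance().get(
        "sum", { header.getByName("v").type }, {});

    return Aggregator::Params(
        header, keys, { sum_v },
        /* overflow_row_ */ false,
        /* max_rows_to_group_by_ */ 0, OverflowMode::THROW,
        /* group_by_two_level_threshold_ */ 100000,
        /* group_by_two_level_threshold_bytes_ */ 100000000,
        /* max_bytes_before_external_group_by_ */ 0,
        /* empty_result_for_aggregation_by_empty_set_ */ false,
        /* tmp_volume_ */ nullptr,
        /* max_threads_ */ 1,
        /* min_free_disk_space_ */ 0);
}
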
  • The Aggregator class
    As the name suggests, this is the class where the actual aggregation work unfolds. Its two most essential methods are:
    • execute: iterates over the input stream's blocks in order and writes the aggregated result into result.
    • mergeAndConvertToBlocks: converts the aggregated result back into an input stream, whose read function passes the result on to the layer above.
      These two calls complete the flow from data in, through aggregation, to data out. The author dissects the details in the next chapter.
class Aggregator
{
public:
    Aggregator(const Params & params_);

    /// Aggregate the source. Get the result in the form of one of the data structures.
    void execute(const BlockInputStreamPtr & stream, AggregatedDataVariants & result);

    using AggregateColumns = std::vector<ColumnRawPtrs>;
    using AggregateColumnsData = std::vector<ColumnAggregateFunction::Container *>;
    using AggregateColumnsConstData = std::vector<const ColumnAggregateFunction::Container *>;
    using AggregateFunctionsPlainPtrs = std::vector<IAggregateFunction *>;

    /// Process one block. Return false if the processing should be aborted (with group_by_overflow_mode = 'break').
    bool executeOnBlock(const Block & block, AggregatedDataVariants & result,
        ColumnRawPtrs & key_columns, AggregateColumns & aggregate_columns,    /// Passed to not create them anew for each block
        bool & no_more_keys);

    bool executeOnBlock(Columns columns, UInt64 num_rows, AggregatedDataVariants & result,
        ColumnRawPtrs & key_columns, AggregateColumns & aggregate_columns,    /// Passed to not create them anew for each block
        bool & no_more_keys);

    /** Convert the aggregation data structure into a block.
      * If overflow_row = true, then aggregates for rows that are not included in max_rows_to_group_by are put in the first block.
      *
      * If final = false, then ColumnAggregateFunction is created as the aggregation columns with the state of the calculations,
      *  which can then be combined with other states (for distributed query processing).
      * If final = true, then columns with ready values are created as aggregate columns.
      */
    BlocksList convertToBlocks(AggregatedDataVariants & data_variants, bool final, size_t max_threads) const;

    /** Merge several aggregation data structures and output the result as a block stream.
      */
    std::unique_ptr<IBlockInputStream> mergeAndConvertToBlocks(ManyAggregatedDataVariants & data_variants, bool final, size_t max_threads) const;
    ManyAggregatedDataVariants prepareVariantsToMerge(ManyAggregatedDataVariants & data_variants) const;

    /** Merge the stream of partially aggregated blocks into one data structure.
      * (Pre-aggregate several blocks that represent the result of independent aggregations from remote servers.)
      */
    void mergeStream(const BlockInputStreamPtr & stream, AggregatedDataVariants & result, size_t max_threads);

    using BucketToBlocks = std::map<Int32, BlocksList>;
    /// Merge partially aggregated blocks separated to buckets into one data structure.
    void mergeBlocks(BucketToBlocks bucket_to_blocks, AggregatedDataVariants & result, size_t max_threads);

    /// Merge several partially aggregated blocks into one.
    /// Precondition: for all blocks block.info.is_overflows flag must be the same.
    /// (either all blocks are from overflow data or none blocks are).
    /// The resulting block has the same value of is_overflows flag.
    Block mergeBlocks(BlocksList & blocks, bool final);

    using CancellationHook = std::function<bool()>;

    /** Set a function that checks whether the current task can be aborted.
      */
    void setCancellationHook(const CancellationHook cancellation_hook);

    /// Get data structure of the result.
    Block getHeader(bool final) const;

    /// ... (the rest of the class is elided)
};

2. The Implementation of the Aggregation Flow

Let's take the Aggregator::execute(const BlockInputStreamPtr & stream, AggregatedDataVariants & result) function mentioned above as the starting point and trace ClickHouse's aggregation implementation:

void Aggregator::execute(const BlockInputStreamPtr & stream, AggregatedDataVariants & result)
{
    Stopwatch watch;

    /// Locals elided in the original excerpt, restored here for completeness.
    ColumnRawPtrs key_columns(params.keys_size);
    AggregateColumns aggregate_columns(params.aggregates_size);
    bool no_more_keys = false;

    size_t src_rows = 0;
    size_t src_bytes = 0;

    /// Read all the data
    while (Block block = stream->read())
    {
        if (isCancelled())
            return;

        src_rows += block.rows();
        src_bytes += block.bytes();

        if (!executeOnBlock(block, result, key_columns, aggregate_columns, no_more_keys))
            break;
    }

    /// ... (final statistics logging is elided)
}

As the code above shows, execute reads the Blocks generated by the child stream one by one and calls executeOnBlock to run the aggregation over each Block. Following the trail, we look at that function next. It is fairly long, so we split it into parts and strip out the incidental code first. This part's job is to extract raw pointers to the key columns and aggregate-argument columns specified in params, and to pack them, together with the aggregate functions, into AggregateFunctionInstructions structures.

bool Aggregator::executeOnBlock(Columns columns, UInt64 num_rows, AggregatedDataVariants & result,
    ColumnRawPtrs & key_columns, AggregateColumns & aggregate_columns, bool & no_more_keys)
{
    /// `result` will destroy the states of aggregate functions in the destructor
    result.aggregator = this;

    /// How to perform the aggregation?
    if (result.empty())
    {
        result.init(method_chosen);
        result.keys_size = params.keys_size;
        result.key_sizes = key_sizes;
        LOG_TRACE(log, "Aggregation method: " << result.getMethodName());
    }

    for (size_t i = 0; i < params.aggregates_size; ++i)
        aggregate_columns[i].resize(params.aggregates[i].arguments.size());

    /** Constant columns are not supported directly during aggregation.
      * To make them work anyway, we materialize them.
      */
    Columns materialized_columns;

    /// Remember the columns we will work with
    for (size_t i = 0; i < params.keys_size; ++i)
    {
        materialized_columns.push_back(columns.at(params.keys[i])->convertToFullColumnIfConst());
        key_columns[i] = materialized_columns.back().get();

        if (!result.isLowCardinality())
        {
            auto column_no_lc = recursiveRemoveLowCardinality(key_columns[i]->getPtr());
            if (column_no_lc.get() != key_columns[i])
            {
                materialized_columns.emplace_back(std::move(column_no_lc));
                key_columns[i] = materialized_columns.back().get();
            }
        }
    }

    AggregateFunctionInstructions aggregate_functions_instructions(params.aggregates_size + 1);
    aggregate_functions_instructions[params.aggregates_size].that = nullptr;

    std::vector<std::vector<const IColumn *>> nested_columns_holder;
    for (size_t i = 0; i < params.aggregates_size; ++i)
    {
        for (size_t j = 0; j < aggregate_columns[i].size(); ++j)
        {
            materialized_columns.push_back(columns.at(params.aggregates[i].arguments[j])->convertToFullColumnIfConst());
            aggregate_columns[i][j] = materialized_columns.back().get();

            auto column_no_lc = recursiveRemoveLowCardinality(aggregate_columns[i][j]->getPtr());
            if (column_no_lc.get() != aggregate_columns[i][j])
            {
                materialized_columns.emplace_back(std::move(column_no_lc));
                aggregate_columns[i][j] = materialized_columns.back().get();
            }
        }

        aggregate_functions_instructions[i].arguments = aggregate_columns[i].data();
        aggregate_functions_instructions[i].state_offset = offsets_of_aggregate_states[i];
        auto that = aggregate_functions[i];
        /// Unnest consecutive trailing -State combinators
        while (auto func = typeid_cast<const AggregateFunctionState *>(that))
            that = func->getNestedFunction().get();
        aggregate_functions_instructions[i].that = that;
        aggregate_functions_instructions[i].func = that->getAddressOfAddFunction();

        if (auto func = typeid_cast<const AggregateFunctionArray *>(that))
        {
            /// Unnest consecutive -State combinators before -Array
            that = func->getNestedFunction().get();
            while (auto nested_func = typeid_cast<const AggregateFunctionState *>(that))
                that = nested_func->getNestedFunction().get();
            auto [nested_columns, offsets] = checkAndGetNestedArrayOffset(aggregate_columns[i].data(), that->getArgumentTypes().size());
            nested_columns_holder.push_back(std::move(nested_columns));
            aggregate_functions_instructions[i].batch_arguments = nested_columns_holder.back().data();
            aggregate_functions_instructions[i].offsets = offsets;
        }
        else
            aggregate_functions_instructions[i].batch_arguments = aggregate_columns[i].data();

        aggregate_functions_instructions[i].batch_that = that;
    }

    /// ... (the function then dispatches to executeImpl, shown below)

With the arguments in place, the aggregation then proceeds through a straightforward call to executeImpl(*result.NAME, result.aggregates_pool, num_rows, key_columns, aggregate_functions_instructions.data(), no_more_keys, overflow_row_ptr). Let's look at its implementation. It is a template function that works by calling executeImplBatch(method, state, aggregates_pool, rows, aggregate_instructions): databases submit work in batches, handing over a whole group of rows at once to cut down the overhead of virtual function calls. (A toy sketch after the executeImpl listing below illustrates the saving.)

template <typename Method>
void NO_INLINE Aggregator::executeImpl(
    Method & method,
    Arena * aggregates_pool,
    size_t rows,
    ColumnRawPtrs & key_columns,
    AggregateFunctionInstruction * aggregate_instructions,
    bool no_more_keys,
    AggregateDataPtr overflow_row) const
{
    typename Method::State state(key_columns, key_sizes, aggregation_state_cache);

    if (!no_more_keys)
        executeImplBatch(method, state, aggregates_pool, rows, aggregate_instructions);
    else
        executeImplCase<true>(method, state, aggregates_pool, rows, aggregate_instructions, overflow_row);
}
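
The saving is easiest to see in a toy example written for this note in plain C++ (it is not ClickHouse code): with a per-row add, the virtual dispatch cost is paid once per row, while addBatch pays it once per batch and leaves a tight loop behind.

#include <cstddef>
#include <cstdint>

struct IToyAgg
{
    virtual ~IToyAgg() = default;
    /// One virtual dispatch per row.
    virtual void add(uint64_t * place, uint64_t value) const = 0;
    /// One virtual dispatch per batch.
    virtual void addBatch(size_t rows, uint64_t ** places, const uint64_t * values) const = 0;
};

struct ToySum : IToyAgg
{
    void add(uint64_t * place, uint64_t value) const override { *place += value; }

    void addBatch(size_t rows, uint64_t ** places, const uint64_t * values) const override
    {
        /// Single dispatch, then a tight loop the compiler can optimize.
        for (size_t i = 0; i < rows; ++i)
            *places[i] += values[i];
    }
};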

Continuing on, executeImplBatch is likewise a template function.

  • First, it builds an array of AggregateDataPtr called places; this is where the actual aggregation results will live. The array is as long as the batch, which means the result pointers themselves form a column of data that participates in the subsequent aggregation.
  • Next, a for loop calls state.emplaceKey row by row, hashing each row's grouping key to classify it and mapping the corresponding result slot into places.
  • Finally, another for loop calls each aggregate function's addBatch method (introduced in the previous post). Every AggregateFunctionInstruction carries a designated state_offset and the value columns it aggregates over; addBatch combines the data pointers in places with those value columns, producing all of the aggregation results.

At this point the core flow of the aggregation is complete. What remains is to convert result into a BlockStream via the convertToBlock path described above and return it to the caller above.

template <typename Method>
void NO_INLINE Aggregator::executeImplBatch(
    Method & method,
    typename Method::State & state,
    Arena * aggregates_pool,
    size_t rows,
    AggregateFunctionInstruction * aggregate_instructions) const
{
    PODArray<AggregateDataPtr> places(rows);

    /// For all rows.
    for (size_t i = 0; i < rows; ++i)
    {
        AggregateDataPtr aggregate_data = nullptr;

        auto emplace_result = state.emplaceKey(method.data, i, *aggregates_pool);

        /// If a new key is inserted, initialize the states of the aggregate functions, and possibly something related to the key.
        if (emplace_result.isInserted())
        {
            /// exception-safety - if you can not allocate memory or create states, then destructors will not be called.
            emplace_result.setMapped(nullptr);

            aggregate_data = aggregates_pool->alignedAlloc(total_size_of_aggregate_states, align_aggregate_states);
            createAggregateStates(aggregate_data);

            emplace_result.setMapped(aggregate_data);
        }
        else
            aggregate_data = emplace_result.getMapped();

        places[i] = aggregate_data;
        assert(places[i] != nullptr);
    }

    /// Add values to the aggregate functions.
    for (AggregateFunctionInstruction * inst = aggregate_instructions; inst->that; ++inst)
    {
        if (inst->offsets)
            inst->batch_that->addBatchArray(rows, places.data(), inst->state_offset, inst->batch_arguments, inst->offsets, aggregates_pool);
        else
            inst->batch_that->addBatch(rows, places.data(), inst->state_offset, inst->batch_arguments, aggregates_pool);
    }
}

3. Wrapping Up

That completes our walk through the code of ClickHouse's aggregation flow.
Aggregation aside, the other physical operators are likewise chained together and processed through streams, so the reading procedure the author followed here can serve as a reference for further source study.
The author is still a ClickHouse beginner; readers interested in ClickHouse are warmly welcome to offer pointers and exchange ideas.

4. References

Official ClickHouse documentation
ClickHouse source code

