The `tidb_server_memory_limit` you mentioned is an instance-level memory limit, and `tidb_mem_quota_query` (since v6.5) is a session-level memory limit; either one can trigger the memory-exceeded error:
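For reference, both limits can be adjusted at runtime. A minimal sketch; the values below are illustrative, not recommendations:

```sql
-- Instance-level limit (v6.4+): total memory a tidb-server instance may use.
SET GLOBAL tidb_server_memory_limit = '8GB';

-- Session-level limit: memory a single query may use before the
-- configured out-of-memory action kicks in (cancel by default in
-- recent versions). 2 << 30 bytes = 2 GiB.
SET SESSION tidb_mem_quota_query = 2 << 30;
```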
The actual requirement is a full-column export of a wide table. The index the WHERE condition can use is clear, and the row count covered by that condition on the index is roughly 10k to 60k; the time ranges are only there to pull the data in batches.
In practice there is no concurrency; the queries run single-threaded.
Eighty-some columns; yes, it really is wide.
Dropping the ORDER BY should cut the memory usage considerably.
Which step in the execution plan is the sort?
Run the statement with EXPLAIN ANALYZE so it actually executes, and look at the resulting plan information.
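A minimal sketch of that, using the table and index from this thread; the time range is illustrative:

```sql
-- EXPLAIN ANALYZE actually executes the statement and reports
-- actRows, per-operator time, and memory next to the optimizer's
-- estimates, unlike plain EXPLAIN which only shows the estimates.
EXPLAIN ANALYZE
SELECT *
FROM xpost
WHERE facetid = 691
  AND update_time >= '2026-01-02 03:06:39'
  AND update_time <  '2026-01-02 06:13:18'
ORDER BY update_time;
```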
What is the difference between EXPLAIN ANALYZE and ANALYZE?
I can't see anything wrong with the execution plan:
| id | estRows | actRows | task | access object | execution info | operator info | memory | disk |
|---|---|---|---|---|---|---|---|---|
| Projection_29 | 4673.65 | 40462 | root |  | time:1.78s, loops:41, RU:2532.364466, Concurrency:5 | bsppr.xpost.postid, bsppr.xpost.facetid, bsppr.xpost.entryid, bsppr.xpost.title, bsppr.xpost.url, bsppr.xpost.abstract, bsppr.xpost.click, bsppr.xpost.reply, bsppr.xpost.repost, bsppr.xpost.praise, bsppr.xpost.collect, bsppr.xpost.watch, bsppr.xpost.wordscount, bsppr.xpost.keywordcount, bsppr.xpost.siteid, bsppr.xpost.domain, bsppr.xpost.author, bsppr.xpost.author_id, bsppr.xpost.posttime, bsppr.xpost.include_t, bsppr.xpost.type, bsppr.xpost.source, bsppr.xpost.hidden, bsppr.xpost.sourcetype, bsppr.xpost.crisis_post, bsppr.xpost.ontop, bsppr.xpost.type_rank, bsppr.xpost.noise_rank, bsppr.xpost.device, bsppr.xpost.is_origin, bsppr.xpost.is_top, bsppr.xpost.media_type, bsppr.xpost.author_type, bsppr.xpost.content_type, bsppr.xpost.client_type, bsppr.xpost.industry, bsppr.xpost.tags, bsppr.xpost.post_type, bsppr.xpost.type_reason, bsppr.xpost.update_time, bsppr.xpost.origin_source, bsppr.xpost.media_id, bsppr.xpost.w_level, bsppr.xpost.sid, bsppr.xpost.location, bsppr.xpost.is_comment, bsppr.xpost.pos_type_rank, bsppr.xpost.text, bsppr.xpost.spider_time, bsppr.xpost.process_time, bsppr.xpost.tidb_in_time, bsppr.xpost.is_yqt, bsppr.xpost.fans_num, bsppr.xpost.gender, bsppr.xpost.author_location, bsppr.xpost.verify_info, bsppr.xpost.author_tags, bsppr.xpost.is_ocr, bsppr.xpost.signature, bsppr.xpost.api_call_num, bsppr.xpost.api_call_time, bsppr.xpost.extra_str_1, bsppr.xpost.extra_str_2, bsppr.xpost.extra_str_3, bsppr.xpost.extra_str_4, bsppr.xpost.extra_str_5, bsppr.xpost.extra_str_6, bsppr.xpost.extra_str_7, bsppr.xpost.extra_str_8, bsppr.xpost.extra_text_1, bsppr.xpost.extra_int_1, bsppr.xpost.extra_int_2, bsppr.xpost.extra_int_3, bsppr.xpost.extra_float_1, bsppr.xpost.extra_float_2, bsppr.xpost.extra_float_3, bsppr.xpost.media_industry, bsppr.xpost.cluster_id, bsppr.xpost.parent_url, bsppr.xpost.personal_sourcetype, bsppr.xpost.personal_domain, bsppr.xpost.server_info | 55.6 MB | N/A |
| └─IndexLookUp_33 | 4673.65 | 40462 | root |  | time:1.78s, loops:41, index_task: {total_time: 92.1ms, fetch_handle: 85.9ms, build: 6.14ms, wait: 21.6µs}, table_task: {total_time: 5.08s, num: 6, concurrency: 5}, next: {wait_index: 7.55ms, wait_table_lookup_build: 263.3ms, wait_table_lookup_resp: 1.23s} |  | 127.9 MB | N/A |
| ├─IndexRangeScan_30(Build) | 5200.41 | 43089 | cop[tikv] | table:xpost, index:idx_facetid_update_time(facetid, update_time) | time:83.8ms, loops:45, cop_task: {num: 9, max: 23.4ms, min: 2.14ms, avg: 10.1ms, p95: 23.4ms, max_proc_keys: 17376, p95_proc_keys: 17376, tot_proc: 78.8ms, tot_wait: 405µs, rpc_num: 9, rpc_time: 90.3ms, copr_cache_hit_ratio: 0.00, build_task_duration: 40.4µs, max_distsql_concurrency: 1}, tikv_task:{proc max:21ms, min:1ms, avg: 8.78ms, p80:18ms, p95:21ms, iters:77, tasks:9}, scan_detail: {total_process_keys: 43089, total_process_keys_size: 2369895, total_keys: 79136, get_snapshot_time: 204.2µs, rocksdb: {delete_skipped_count: 80134, key_skipped_count: 159261, block: {cache_hit_count: 166, read_count: 266, read_byte: 3.40 MB, read_time: 8.77ms}}} | range:[691 2026-01-02 03:06:39,691 2026-01-02 06:13:18), keep order:true | N/A | N/A |
| └─Selection_32(Probe) | 4673.65 | 40462 | cop[tikv] |  | time:3.51s, loops:49, cop_task: {num: 5461, max: 401.5ms, min: 0s, avg: 17.2ms, p95: 130.9ms, max_proc_keys: 86, p95_proc_keys: 34, tot_proc: 8.92s, tot_wait: 12.3s, rpc_num: 1357, rpc_time: 1m33.7s, copr_cache_hit_ratio: 0.00, build_task_duration: 1.53s, max_distsql_concurrency: 15, max_extra_concurrency: 105, store_batch_num: 4104}, tikv_task:{proc max:31ms, min:0s, avg: 1.64ms, p80:2ms, p95:6ms, iters:5791, tasks:5461}, scan_detail: {total_process_keys: 43089, total_process_keys_size: 129490650, total_keys: 43103, get_snapshot_time: 9.87s, rocksdb: {key_skipped_count: 42, block: {cache_hit_count: 669970, read_count: 44149, read_byte: 332.2 MB, read_time: 3.43s}}} | or(lt(bsppr.xpost.tidb_in_time, 2026-01-02 00:00:00.000000), gt(bsppr.xpost.tidb_in_time, 2026-01-03 03:59:59.000000)) | N/A | N/A |
|   └─TableRowIDScan_31 | 5200.41 | 43089 | cop[tikv] | table:xpost | tikv_task:{proc max:31ms, min:0s, avg: 1.64ms, p80:2ms, p95:6ms, iters:5791, tasks:5461} | keep order:false | N/A | N/A |
ALTER TABLE xpost ADD INDEX idx_facetid_updatetime_tidbin (facetid, update_time, tidb_in_time);
In a composite index, once a column is matched by a range condition, the columns after it can no longer be used to narrow the scan, so this index should be useless here.
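A sketch of the rule this reply refers to, using the predicates from the plan above (the index name is the hypothetical one proposed earlier):

```sql
-- With idx_facetid_updatetime_tidbin (facetid, update_time, tidb_in_time):
--   facetid = 691                -- equality: extends the index range
--   update_time BETWEEN t1, t2   -- range: range construction stops here
--   tidb_in_time < ... OR > ...  -- cannot narrow the scanned range further
EXPLAIN
SELECT postid
FROM xpost
WHERE facetid = 691
  AND update_time BETWEEN '2026-01-02 03:06:39' AND '2026-01-02 06:13:18'
  AND (tidb_in_time < '2026-01-02 00:00:00'
    OR tidb_in_time > '2026-01-03 03:59:59');
```

In the resulting plan, the `range:` shown for the index scan covers only facetid and update_time; the tidb_in_time condition appears as a filter instead of being part of the range.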
The estimated row count differs quite a bit from the actual one (4673.65 estimated vs 40462 actual). Are the statistics accurate?
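If the statistics turn out to be stale, refreshing them is straightforward; a minimal sketch:

```sql
-- Check how up-to-date the table's statistics are (100 means fresh).
SHOW STATS_HEALTHY WHERE Table_name = 'xpost';

-- Recollect statistics so the optimizer's estimates match reality.
ANALYZE TABLE xpost;
```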