# Syncing Incremental Data from OceanBase to TiDB with Binlog Server + Canal + Canal Adapter

Author: Billmay表妹
Source: https://tidb.net/blog/c7445005

## Background

This walkthrough uses OceanBase Binlog Server, Canal, and Canal Adapter to sync incremental data from OceanBase (OB) to TiDB. It covers deployment, configuration adjustment, service startup, and sync verification, as follows.

## Setting up OceanBase Binlog Server

### Prerequisites

Before deploying Binlog Server (obbinlog), make sure that:

- The OceanBase cluster has `obconfig_url` configured. Log in to the cluster and run:

  ```sql
  SHOW PARAMETERS LIKE 'obconfig_url';
  ```

  If it is not configured, install `obconfigserver` manually and set it up (see the doc "Deploy obconfigserver from the command line").
- ODP (OBProxy) is deployed and version-compatible. The Binlog service relies on ODP for connection support, and the ODP and OceanBase versions must be within the supported range (see the release notes).
- Network connectivity: the Binlog Server can reach the OceanBase instance's SQL/RPC ports and the metadata database port, and ODP can reach `binlog_service_ip`.

### Step 1: install (Community Edition)

Using yum as an example:

```shell
# Add the software repository first, then install
yum install -y obbinlog
```

The default install path is `/home/ds/oblogproxy`.

Note: Enterprise Edition users need to contact OceanBase technical support for the package; see "Binlog service introduction". Optionally, you can download the RPM and extract it to a custom directory with `rpm2cpio`.

### Step 2: initialize and start

The first node to start must initialize the metadata tables; subsequent nodes do not repeat this step. After startup, check node status with:

```sql
SHOW NODES;
```

See "Node management" for details.

## Subscribing an OceanBase tenant to the Binlog Server

### Create a Binlog task

First confirm the tenant information:

```sql
-- Cluster name
SHOW PARAMETERS LIKE 'cluster';
-- config_url
SHOW PARAMETERS LIKE 'obconfig_url';
```

Then run `CREATE BINLOG` on the Binlog Server, for example:

```sql
CREATE BINLOG INSTANCE binlog1 FOR `demo`.`obmysql`
  CLUSTER_URL='http://1xx.xx.xx.1:8080/services?Action=ObRootServiceInfo';
```

Parameter notes:

- `${cluster_name}`: the actual cluster name
- `${tenant_name}`: the tenant name
- `${config_url}`: the value returned by `SHOW PARAMETERS LIKE 'obconfig_url'`

Reference: "Create a Binlog instance".

## Checking whether the OceanBase instance is generating binlogs normally

### Method 1: inspect the logs

Check the obbinlog run log, usually at `/home/ds/oblogproxy/log/logproxy.log`, for key errors or status messages (for example, whether clog pulls succeed). If you see a resource-shortage error such as:

```
[error] selection_strategy.cpp(519): [ResourcesFilter] The resource threshold of node ... does not meet requirements
```

check whether CPU, memory, or disk usage exceeds the thresholds. See the troubleshooting manual.

### Method 2: monitoring and diagnostics

Use the `obdiag` tool for one-click diagnosis and collection of cluster and Binlog status information.

## Install directory and the run subdirectory

The Community Edition installs to `/home/ds/oblogproxy` by default. Enter the run directory and list its contents:

```shell
cd /home/ds/oblogproxy/run
ls -la
```

Common subdirectories and files:

- `bin/`: executables, such as the `logproxy` main process
- `conf/`: configuration files
- `log/`: log files, especially `logproxy.log`
- `run/`: runtime PID files, socket files, etc.
- `lib/`: dependency libraries

You can also check the running process:

```shell
ps -ef | grep logproxy
```

## Additional notes

- Unsupported scenarios: the OceanBase Binlog service is currently not suitable for primary/standby setups or incremental restore (see "Binlog service introduction").
- Version compatibility: different obbinlog versions support different OceanBase versions. If your version is out of range, you can manually install the matching `obcdc` dependency (see "obbinlog V4.3.2").

## Summary

| Task | Key command / path |
| --- | --- |
| Check obconfig_url | `SHOW PARAMETERS LIKE 'obconfig_url';` |
| Create Binlog instance | `CREATE BINLOG INSTANCE ... FOR cluster.tenant CLUSTER_URL=...` |
| Install directory | `/home/ds/oblogproxy` |
| Log path | `/home/ds/oblogproxy/log/logproxy.log` |
| Query nodes | `SHOW NODES;` |

Consider OCP or the `obd` tool for visual management and automated deployment to improve operational efficiency. See the official deployment guide and troubleshooting manual for more details.

## Install ZooKeeper

Kafka is also built on ZooKeeper, and the Kafka tarball can start ZooKeeper directly:

```shell
wget https://archive.apache.org/dist/kafka/3.9.0/kafka_2.13-3.9.0.tgz
tar zxvf kafka_2.13-3.9.0.tgz
cd kafka_2.13-3.9.0
bin/zookeeper-server-start.sh config/zookeeper.properties
```

## Install Java

```shell
yum -y install java
java --version
```

Output:

```
openjdk 11.0.21 2023-10-17
OpenJDK Runtime Environment Bisheng (build 11.0.21+9)
OpenJDK 64-Bit Server VM Bisheng (build 11.0.21+9, mixed mode, sharing)
```

## Install Canal

Download `canal.deployer-1.1.8.tar.gz` and `canal.adapter-1.1.8.tar.gz`:

```shell
wget https://github.com/alibaba/canal/releases/download/canal-1.1.8/canal.deployer-1.1.8.tar.gz
wget https://github.com/alibaba/canal/releases/download/canal-1.1.8/canal.adapter-1.1.8.tar.gz
```

## Configure the deployer

Two files need changes: `canal.properties` and `instance.properties`.

Edit `canal.properties`:

```shell
vi /root/canal-for-ob-1.1.8/conf/canal.properties
```

`canal.properties`, first part (the `=` signs and `#` comment markers lost in the original formatting are restored below):

```properties
#################################################
#########     common argument     ###############
#################################################
# tcp bind ip
canal.ip =
# register ip to zookeeper
canal.register.ip =
canal.port = 11111
canal.metrics.pull.port = 11112
# canal instance user/passwd
canal.user = canal
canal.passwd =
# canal admin config
#canal.admin.manager = 127.0.0.1:8089
canal.admin.port = 11110
canal.admin.user = admin
```
`canal.properties`, continued:

```properties
canal.admin.passwd = admin
# auto register
#canal.admin.register.auto = true
#canal.admin.register.cluster =
#canal.admin.register.name =

# NOTE: set your ZooKeeper address here
canal.zkServers = 127.0.0.1:2181
# flush data to zk
canal.zookeeper.flush.period = 1000
canal.withoutNetty = false
# tcp, kafka, rocketMQ, rabbitMQ, pulsarMQ
# NOTE: use tcp mode
canal.serverMode = tcp
# flush meta cursor/parse position to file
canal.file.data.dir = ${canal.conf.dir}
canal.file.flush.period = 1000
# memory store RingBuffer size, should be Math.pow(2,n)
canal.instance.memory.buffer.size = 16384
# memory store RingBuffer used memory unit size, default 1kb
canal.instance.memory.buffer.memunit = 1024
# memory store gets mode used MEMSIZE or ITEMSIZE
canal.instance.memory.batch.mode = MEMSIZE
canal.instance.memory.rawEntry = true

# detecting config
canal.instance.detecting.enable = false
#canal.instance.detecting.sql = insert into retl.xdual values(1,now()) on duplicate key update x=now()
canal.instance.detecting.sql = select 1
canal.instance.detecting.interval.time = 3
canal.instance.detecting.retry.threshold = 3
canal.instance.detecting.heartbeatHaEnable = false

# support maximum transaction size, more than the size of the transaction will be cut into multiple transactions delivery
canal.instance.transaction.size = 1024
# mysql fallback connected to new master should fallback times
canal.instance.fallbackIntervalInSeconds = 60

# network config
canal.instance.network.receiveBufferSize = 16384
canal.instance.network.sendBufferSize = 16384
canal.instance.network.soTimeout = 30

# binlog filter config
canal.instance.filter.druid.ddl = true
canal.instance.filter.query.dcl = false
canal.instance.filter.query.dml = false
canal.instance.filter.query.ddl = false
canal.instance.filter.table.error = false
canal.instance.filter.rows = false
canal.instance.filter.transaction.entry = false
canal.instance.filter.dml.insert = false
canal.instance.filter.dml.update = false
canal.instance.filter.dml.delete = false

# binlog format/image check
canal.instance.binlog.format = ROW,STATEMENT,MIXED
canal.instance.binlog.image = FULL,MINIMAL,NOBLOB

# binlog ddl isolation
canal.instance.get.ddl.isolation = false

# parallel parser config
canal.instance.parser.parallel = true
# concurrent thread number, default 60% available processors, suggest not to exceed Runtime.getRuntime().availableProcessors()
#canal.instance.parser.parallelThreadSize = 16
# disruptor ringbuffer size, must be power of 2
canal.instance.parser.parallelBufferSize = 256

# table meta tsdb info
canal.instance.tsdb.enable = true
canal.instance.tsdb.dir = ${canal.file.data.dir:../conf}/${canal.instance.destination:}
canal.instance.tsdb.url = jdbc:h2:${canal.instance.tsdb.dir}/h2;CACHE_SIZE=1000;MODE=MYSQL;
canal.instance.tsdb.dbUsername = canal
canal.instance.tsdb.dbPassword = canal
# dump snapshot interval, default 24 hour
canal.instance.tsdb.snapshot.interval = 24
# purge snapshot expire, default 360 hour (15 days)
canal.instance.tsdb.snapshot.expire = 360

#################################################
#########      destinations      ################
#################################################
canal.destinations = example
# conf root dir
canal.conf.dir = ../conf
# auto scan instance dir add/remove and start/stop instance
canal.auto.scan = true
canal.auto.scan.interval = 5
# set this value to true means that when binlog pos not found, skip to latest.
# WARN: pls keep false in production env, or if you know what you want.
canal.auto.reset.latest.pos.mode = false

canal.instance.tsdb.spring.xml = classpath:spring/tsdb/h2-tsdb.xml
#canal.instance.tsdb.spring.xml = classpath:spring/tsdb/mysql-tsdb.xml

canal.instance.global.mode = spring
canal.instance.global.lazy = false
canal.instance.global.manager.address = ${canal.admin.manager}
#canal.instance.global.spring.xml = classpath:spring/memory-instance.xml
#canal.instance.global.spring.xml = classpath:spring/file-instance.xml
canal.instance.global.spring.xml = classpath:spring/default-instance.xml
#canal.instance.global.spring.xml = classpath:spring/ob-default-instance.xml

##################################################
#########       MQ Properties      ###############
##################################################
# aliyun ak/sk, support rds/mq
canal.aliyun.accessKey =
canal.aliyun.secretKey =
canal.aliyun.uid =

canal.mq.flatMessage = true
canal.mq.canalBatchSize = 50
canal.mq.canalGetTimeout = 100
# Set this value to "cloud", if you want open message trace feature in aliyun.
canal.mq.accessChannel = local

canal.mq.database.hash = true
canal.mq.send.thread.size = 30
canal.mq.build.thread.size = 8

##################################################
#########          Kafka           ###############
##################################################
kafka.bootstrap.servers = 127.0.0.1:9092
kafka.acks = all
kafka.compression.type = none
kafka.batch.size = 16384
kafka.linger.ms = 1
kafka.max.request.size = 1048576
kafka.buffer.memory = 33554432
kafka.max.in.flight.requests.per.connection = 1
kafka.retries = 0

kafka.kerberos.enable = false
kafka.kerberos.krb5.file = ../conf/kerberos/krb5.conf
kafka.kerberos.jaas.file = ../conf/kerberos/jaas.conf

# sasl demo
# kafka.sasl.jaas.config = org.apache.kafka.common.security.scram.ScramLoginModule required \\n username="alice" \\n password="alice-secret";
# kafka.sasl.mechanism = SCRAM-SHA-512
# kafka.security.protocol = SASL_PLAINTEXT

##################################################
#########         RocketMQ         ###############
##################################################
rocketmq.producer.group = test
rocketmq.enable.message.trace = false
rocketmq.customized.trace.topic =
rocketmq.namespace =
rocketmq.namesrv.addr = 127.0.0.1:9876
rocketmq.retry.times.when.send.failed = 0
rocketmq.vip.channel.enabled = false
rocketmq.tag =

##################################################
#########         RabbitMQ         ###############
##################################################
rabbitmq.host =
rabbitmq.virtual.host =
rabbitmq.exchange =
rabbitmq.username =
rabbitmq.password =
rabbitmq.queue =
rabbitmq.routingKey =
rabbitmq.deliveryMode =

##################################################
#########          Pulsar          ###############
##################################################
pulsarmq.serverUrl =
pulsarmq.roleToken =
pulsarmq.topicTenantPrefix =
```

Edit `instance.properties` (pay attention to the required parameters):

```shell
vi /root/canal-for-ob-1.1.8/conf/example/instance.properties
```

```properties
#################################################
## mysql serverId, v1.0.26+ will autoGen
# canal.instance.mysql.slaveId=0

# enable gtid use true/false
canal.instance.gtidon=false

# rds oss binlog
canal.instance.rds.accesskey=
canal.instance.rds.secretkey=
canal.instance.rds.instanceId=

# position info
# NOTE: set the OBProxy address here
canal.instance.master.address=10.10.10.101:2883
canal.instance.master.journal.name=
canal.instance.master.position=
canal.instance.master.timestamp=
canal.instance.master.gtid=

# multi stream for polardbx
canal.instance.multi.stream.on=false

# ssl
#canal.instance.master.sslMode=DISABLED
#canal.instance.master.tlsVersions=
#canal.instance.master.trustCertificateKeyStoreType=
#canal.instance.master.trustCertificateKeyStoreUrl=
#canal.instance.master.trustCertificateKeyStorePassword=
#canal.instance.master.clientCertificateKeyStoreType=
#canal.instance.master.clientCertificateKeyStoreUrl=
#canal.instance.master.clientCertificateKeyStorePassword=

# table meta tsdb info
canal.instance.tsdb.enable=true
#canal.instance.tsdb.url=jdbc:mysql://127.0.0.1:3306/canal_tsdb
#canal.instance.tsdb.dbUsername=canal
#canal.instance.tsdb.dbPassword=canal

#canal.instance.standby.address=
#canal.instance.standby.journal.name=
#canal.instance.standby.position=
#canal.instance.standby.timestamp=
#canal.instance.standby.gtid=

# username/password
# NOTE: the OB user and password
canal.instance.dbUsername=root@ob_user1#ob_test1
canal.instance.dbPassword=PassworD123
canal.instance.connectionCharset = UTF-8
# enable druid Decrypt database password
canal.instance.enableDruid=false
#canal.instance.pwdPublicKey=MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBALK4BUxdDltRRE5/zXpVEVPUgunvscYFtEip3pmLlhrWpacX7y7GCMo2/JM6LeHmiiNdH1FWgGCpUfircSwlWKUCAwEAAQ==

# table regex
canal.instance.filter.regex=.*\\..*
# table black regex
canal.instance.filter.black.regex=mysql\\.slave_.*
# table field filter (format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
#canal.instance.filter.field=test1.t_product:id/subject/keywords,test2.t_company:id/name/contact/ch
# table field black filter (format: schema1.tableName1:field1/field2,schema2.tableName2:field1/field2)
#canal.instance.filter.black.field=test1.t_product:subject/product_image,test2.t_company:id/name/contact/ch

# mq config
canal.mq.topic=example
# dynamic topic route by schema or table regex
#canal.mq.dynamicTopic=mytest1.user,topic2:mytest2\\..*,.*\\..*
canal.mq.partition=0
# hash partition config
#canal.mq.enableDynamicQueuePartition=false
#canal.mq.partitionsNum=3
#canal.mq.dynamicTopicPartitionNum=test.*:4,mycanal:6
#canal.mq.partitionHash=test.table:id^name,.*\\..*
#################################################
```

## Start the canal server

```shell
sh /root/canal-for-ob-1.1.8/bin/startup.sh
```

Normal log output (no errors; if any appear, investigate and resolve them first):

```
2025-12-11 17:18:50.995 [main] INFO  com.alibaba.otter.canal.deployer.CanalLauncher - ## set default uncaught exception handler
2025-12-11 17:18:51.001 [main] INFO  com.alibaba.otter.canal.deployer.CanalLauncher - ## load canal configurations
2025-12-11 17:18:51.008 [main] INFO  com.alibaba.otter.canal.deployer.CanalStarter - ## start the canal server.
2025-12-11 17:18:51.089 [main] INFO  com.alibaba.otter.canal.deployer.CanalController - ## start the canal server[172.17.0.1(172.17.0.1):11111]
2025-12-11 17:18:52.038 [main] INFO  com.alibaba.otter.canal.deployer.CanalStarter - ## the canal server is running now ......
```

## Configure the canal adapter

```shell
vi /root/canal-for-adapter-ob-1.1.8/conf/application.yml
```

`application.yml`:

```yaml
server:
  port: 8081
spring:
  jackson:
    date-format: yyyy-MM-dd HH:mm:ss
    time-zone: GMT+8
    default-property-inclusion: non_null

canal.conf:
  mode: tcp # tcp kafka rocketMQ rabbitMQ
  flatMessage: true
  zookeeperHosts:
  syncBatchSize: 1000
  retries: -1
  timeout:
  accessKey:
  secretKey:
  consumerProperties:
    # canal tcp consumer
    canal.tcp.server.host: 127.0.0.1:11111   # the canal server address
    canal.tcp.zookeeper.hosts:
    canal.tcp.batch.size: 500
    canal.tcp.username:
    canal.tcp.password:
    # kafka consumer
    # rocketMQ consumer
    # rabbitMQ consumer
  srcDataSources:
    defaultDS:
      url: jdbc:mysql://xx.xxx.xx.203:2883/db1?useUnicode=true   # source (OB) address
      username: root@ob_user1#ob_test1                           # OB username
      password: PassworD123                                      # OB password
  canalAdapters:
  - instance: example # canal instance Name or mq topic name
    groups:
    - groupId: g1
      outerAdapters:
      - name: rdb
        key: mysql1   # remember this key; it is referenced again in the rdb mapping file
        properties:
          jdbc.driverClassName: com.mysql.jdbc.Driver
          jdbc.url: jdbc:mysql://xx.xxx.xxx.247:4000/db1?useUnicode=true   # target (TiDB) address
          jdbc.username: tidb_test1
          jdbc.password: PassworD123
```

## Configure mytest_user.yml (the subscription/sync mapping)

```shell
vi /root/canal-for-adapter-ob-1.1.8/conf/rdb/mytest_user.yml
```

`mytest_user.yml`:

```yaml
dataSourceKey: defaultDS
destination: example
groupId: g1
outerAdapterKey: mysql1   # must match the adapter key configured above
concurrent: true
dbMapping:
  mirrorDb: true
  database: db1
```

## Start the canal-adapter

```shell
sh /root/canal-for-adapter-ob-1.1.8/bin/startup.sh
```

Relevant log output with no errors (this block shows a clean shutdown of a previous run):

```
2025-12-11 15:30:28.800 [SpringApplicationShutdownHook] INFO  ru.yandex.clickhouse.ClickHouseDriver - Driver registered
2025-12-11 15:30:29.885 [SpringApplicationShutdownHook] INFO  c.a.o.canal.adapter.launcher.loader.CanalAdapterService - ## stop the canal client adapters
2025-12-11 15:30:29.886 [pool-9-thread-1] INFO  c.a.otter.canal.adapter.launcher.loader.AdapterProcessor - destination example is waiting for adapters worker thread die!
2025-12-11 15:30:29.961 [pool-9-thread-1] INFO  c.a.otter.canal.adapter.launcher.loader.AdapterProcessor - destination example adapters worker thread dead!
2025-12-11 15:30:30.158 [pool-9-thread-1] INFO  com.alibaba.druid.pool.DruidDataSource - {dataSource-2} closing ...
2025-12-11 15:30:30.162 [pool-9-thread-1] INFO  com.alibaba.druid.pool.DruidDataSource - {dataSource-2} closed
2025-12-11 15:30:30.162 [pool-9-thread-1] INFO  c.a.otter.canal.adapter.launcher.loader.AdapterProcessor - destination example all adapters destroyed!
2025-12-11 15:30:30.162 [SpringApplicationShutdownHook] INFO  c.a.o.canal.adapter.launcher.loader.CanalAdapterLoader - All canal adapters destroyed
2025-12-11 15:30:30.162 [SpringApplicationShutdownHook] INFO  com.alibaba.druid.pool.DruidDataSource - {dataSource-1} closing ...
2025-12-11 15:30:30.162 [SpringApplicationShutdownHook] INFO  com.alibaba.druid.pool.DruidDataSource - {dataSource-1} closed
2025-12-11 15:30:30.163 [SpringApplicationShutdownHook] INFO  c.a.o.canal.adapter.launcher.loader.CanalAdapterService - ## canal client adapters are down.
```
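The configuration above has two easy-to-miss trip wires: `canal.serverMode` must be `tcp` with `canal.zkServers` set, and the `key` of the rdb adapter in `application.yml` must match `outerAdapterKey` in the rdb mapping file. As a minimal sketch (the function names here are mine, not part of Canal), the checks can be automated before restarting the services:

```python
def parse_properties(text):
    """Minimal parser for `key = value` lines; blank lines and # comments are skipped."""
    props = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        key, sep, value = line.partition("=")
        if sep:  # only keep lines that actually contain '='
            props[key.strip()] = value.strip()
    return props


def check_sync_config(canal_props, adapter_key, mapping_key):
    """Cross-check the settings this walkthrough depends on; returns a list of problems."""
    problems = []
    if canal_props.get("canal.serverMode") != "tcp":
        problems.append("canal.serverMode must be tcp for the adapter's TCP mode")
    if not canal_props.get("canal.zkServers"):
        problems.append("canal.zkServers is empty; point it at the ZooKeeper address")
    if adapter_key != mapping_key:
        problems.append(
            f"outerAdapterKey {mapping_key!r} does not match the rdb adapter key {adapter_key!r}"
        )
    return problems


canal_props = parse_properties("""
canal.zkServers = 127.0.0.1:2181
canal.serverMode = tcp
""")
print(check_sync_config(canal_props, "mysql1", "mysql1"))  # []
print(check_sync_config(canal_props, "mysql1", "mysql2"))  # one mismatch reported
```

In practice you would read the real `canal.properties` and the two YAML files from disk instead of the inline sample; the point is to fail fast on a key mismatch rather than discover it in the adapter logs.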
Startup log output:

```
2025-12-11 17:26:01.842 [main] INFO  c.a.otter.canal.adapter.launcher.CanalAdapterApplication - Starting CanalAdapterApplication using Java xx.0.21 on tidbxxx.xxx.xxx.xxx.net with PID 3965171 (/root/canal-for-adapter-ob-1.1.8/lib/client-adapter.launcher-1.1.8.jar started by root in /root/canal-for-adapter-ob-1.1.8/bin)
2025-12-11 17:26:01.847 [main] INFO  c.a.otter.canal.adapter.launcher.CanalAdapterApplication - No active profile set, falling back to 1 default profile: default
2025-12-11 17:26:02.300 [main] INFO  org.springframework.cloud.context.scope.GenericScope - BeanFactory id=d4f2b56b-aacd-327d-9217-5ce4cfc37805
2025-12-11 17:26:02.480 [main] INFO  o.s.boot.web.embedded.tomcat.TomcatWebServer - Tomcat initialized with port(s): 8081 (http)
2025-12-11 17:26:02.487 [main] INFO  org.apache.coyote.http11.Http11NioProtocol - Initializing ProtocolHandler [http-nio-8081]
2025-12-11 17:26:02.487 [main] INFO  org.apache.catalina.core.StandardService - Starting service [Tomcat]
2025-12-11 17:26:02.487 [main] INFO  org.apache.catalina.core.StandardEngine - Starting Servlet engine: [Apache Tomcat/9.0.75]
2025-12-11 17:26:02.570 [main] INFO  o.a.catalina.core.ContainerBase.[Tomcat].[localhost].[/] - Initializing Spring embedded WebApplicationContext
2025-12-11 17:26:02.570 [main] INFO  o.s.b.w.s.context.ServletWebServerApplicationContext - Root WebApplicationContext: initialization completed in 692 ms
2025-12-11 17:26:02.806 [main] INFO  com.alibaba.druid.pool.DruidDataSource - {dataSource-1} inited
2025-12-11 17:26:03.104 [main] INFO  org.apache.coyote.http11.Http11NioProtocol - Starting ProtocolHandler [http-nio-8081]
2025-12-11 17:26:03.115 [main] INFO  o.s.boot.web.embedded.tomcat.TomcatWebServer - Tomcat started on port(s): 8081 (http) with context path ''
2025-12-11 17:26:03.118 [main] INFO  c.a.o.canal.adapter.launcher.loader.CanalAdapterService - ## syncSwitch refreshed.
2025-12-11 17:26:03.118 [main] INFO  c.a.o.canal.adapter.launcher.loader.CanalAdapterService - ## start the canal client adapters.
2025-12-11 17:26:03.119 [main] INFO  c.a.otter.canal.client.adapter.support.ExtensionLoader - extension classpath dir: /root/canal-for-adapter-ob-1.1.8/plugin
2025-12-11 17:26:03.166 [main] INFO  c.a.otter.canal.client.adapter.rdb.config.ConfigLoader - ## Start loading rdb mapping config ...
2025-12-11 17:26:03.174 [main] INFO  c.a.otter.canal.client.adapter.rdb.config.ConfigLoader - ## Rdb mapping config loaded
2025-12-11 17:26:03.198 [main] INFO  com.alibaba.druid.pool.DruidDataSource - {dataSource-2} inited
2025-12-11 17:26:03.202 [main] INFO  c.a.o.canal.adapter.launcher.loader.CanalAdapterLoader - Load canal adapter: rdb succeed
2025-12-11 17:26:03.207 [main] INFO  c.alibaba.otter.canal.connector.core.spi.ExtensionLoader - extension classpath dir: /root/canal-for-adapter-ob-1.1.8/plugin
2025-12-11 17:26:03.221 [main] INFO  c.a.o.canal.adapter.launcher.loader.CanalAdapterLoader - Start adapter for canal-client mq topic: example-g1 succeed
2025-12-11 17:26:03.222 [main] INFO  c.a.o.canal.adapter.launcher.loader.CanalAdapterService - ## the canal client adapters are running now ......
2025-12-11 17:26:03.222 [Thread-3] INFO  c.a.otter.canal.adapter.launcher.loader.AdapterProcessor - Start to connect destination: example
2025-12-11 17:26:03.228 [main] INFO  c.a.otter.canal.adapter.launcher.CanalAdapterApplication - Started CanalAdapterApplication in 1.697 seconds (JVM running for 2.164)
2025-12-11 17:26:03.354 [Thread-3] INFO  c.a.otter.canal.adapter.launcher.loader.AdapterProcessor - Subscribe destination: example succeed
```

## Verifying OceanBase incremental sync

Insert data on the OB side and confirm it arrives in TiDB.

On OceanBase:

```sql
mysql> select version();
+------------------------------+
| version()                    |
+------------------------------+
| 5.7.25-OceanBase_CE-v4.3.5.4 |
+------------------------------+
1 row in set (0.00 sec)

mysql> use db1;
Database changed
mysql> show tables;
+---------------+
| Tables_in_db1 |
+---------------+
| t1            |
+---------------+
1 row in set (0.00 sec)

mysql> desc t1;
+-------+-------------+------+-----+---------+-------+
| Field | Type        | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
| id    | int(11)     | NO   | PRI | NULL    |       |
| col1  | varchar(20) | YES  |     | NULL    |       |
+-------+-------------+------+-----+---------+-------+
2 rows in set (0.01 sec)

mysql> select * from t1;
+----+------+
| id | col1 |
+----+------+
|  1 | ccc  |
|  2 | ccc  |
|  3 | ccc  |
+----+------+
3 rows in set (0.00 sec)

mysql> insert into t1 (id,col1) values (4,'ddd');
Query OK, 1 row affected (0.01 sec)

mysql> select * from t1;
+----+------+
| id | col1 |
+----+------+
|  1 | ccc  |
|  2 | ccc  |
|  3 | ccc  |
|  4 | ddd  |
+----+------+
4 rows in set (0.00 sec)
```

On TiDB, the inserted row has been synced:

```sql
mysql> select version();
+--------------------+
| version()          |
+--------------------+
| 8.0.11-TiDB-v7.5.5 |
+--------------------+
1 row in set (0.00 sec)

mysql> use db1;
Database changed
mysql> select * from t1;
+----+------+
| id | col1 |
+----+------+
|  1 | ccc  |
|  2 | ccc  |
|  3 | ccc  |
|  4 | ddd  |
+----+------+
4 rows in set (0.00 sec)
```

## Notes

- Version compatibility: make sure the obbinlog, OB cluster, ODP, and Canal versions match.
- Log monitoring: regularly check `logproxy.log` and the Canal Server/Adapter logs, and promptly investigate resource shortages (CPU/memory/disk) or connection problems.
- Operational efficiency: consider OCP or the `obd` tool for visual management and automated deployment.

## Conclusion

TiDB and OceanBase are both representative domestic distributed databases, and each has technical strengths that make it a preferred tool for operations DBAs. In recent years, more and more OceanBase users have chosen TiDB as a downstream database, a trend that reflects differences between the two in features, ecosystem, and fit with user needs. The main drivers include simplifying the technology stack and reducing operating costs, TiDB's application-friendliness and development fit, cross-city replication and stability requirements, and an active community with long-term momentum. As enterprises pay more attention to technical flexibility, operational efficiency, and long-term cost, TiDB's compatibility, scalability, and ecosystem advantages make it a preferred downstream database for OceanBase users looking to broaden their stack and reduce lock-in risk. This trend illustrates both the diversity of demand in the distributed database market and TiDB's overall competitiveness in complex scenarios.
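Appendix: spot-checking a `SELECT *` works for a handful of rows, but for larger tables it helps to diff the full row sets of source and target. A minimal sketch of that comparison follows; the function name is mine, and in practice you would fetch each side with a MySQL client library (for example PyMySQL) using `SELECT * FROM t1 ORDER BY id` against the OBProxy and TiDB endpoints. Here the logic is demonstrated with the rows from the session above:

```python
def diff_rows(src_rows, dst_rows):
    """Compare two row sets; returns (missing_in_dst, extra_in_dst) as sets of tuples."""
    src = set(map(tuple, src_rows))
    dst = set(map(tuple, dst_rows))
    return src - dst, dst - src


# Rows from the OceanBase and TiDB sessions above
ob_rows = [(1, "ccc"), (2, "ccc"), (3, "ccc"), (4, "ddd")]
tidb_rows = [(1, "ccc"), (2, "ccc"), (3, "ccc"), (4, "ddd")]

missing, extra = diff_rows(ob_rows, tidb_rows)
print("missing in TiDB:", missing)  # set()
print("extra in TiDB:", extra)      # set()
```

Set-based comparison ignores row order, so an `ORDER BY` on the fetch side is only needed if you later switch to a streaming, chunk-by-chunk checksum for tables too large to hold in memory.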