Compare commits

...

52 Commits

Author SHA1 Message Date
zeaslity
06b044dabc Add Harbor management features and improve the file-operation utilities
- Add Harbor install, start, stop, and uninstall subcommands in Base.go
- Implement the local Harbor installation flow, including config-file editing and container checks
- Improve command-execution error handling in Excutor.go
- Add a MoveFileToAnother method in FileUtils.go and refine the file-move logic
- Fix the file path and move method used by the local DockerCompose install command
2025-03-07 10:58:20 +08:00
zeaslity
5c2325b7e4 Improve Docker and SSH commands; strengthen system configuration and installation flows
- Update the Docker install command; bump the default version to 20.10.24
- Add a Docker config command that can write daemon.json on the master node
- Adjust SSH-related commands, adding newlines to the output for readability
- Improve Zsh plugin installation by using the full git path
- Refine hostname generation in system configuration, appending the last octet of the internal IP
- Improve OS type detection, extending recognition of Ubuntu-like systems
- Adjust output formatting in package-management operations
2025-03-05 16:47:14 +08:00
zeaslity
2e96490926 Add Doris deployment content 2025-03-05 14:46:36 +08:00
zeaslity
3cf5e369c1 Refine Command Execution and Test Script
- Updated HardCodeCommandExecutor logging to use more descriptive log prefix
- Modified run_test.sh to streamline proxy and firewall configuration commands
- Removed commented test shell script execution
- Simplified proxy installation commands
2025-03-03 15:36:57 +08:00
zeaslity
7c92512a7e Improve Xray Proxy Management and Configuration
- Added 'remove' subcommand for Xray proxy
- Enhanced VMESS installation with V2rayNG config generation
- Updated Xray installation process with improved error handling
- Modified vmess template to separate Clash and V2rayNG configurations
- Fixed command existence check in PackageOperator
2025-03-01 00:32:53 +08:00
zeaslity
db3d259a0a Enhance Proxy and Configuration Management
- Implemented comprehensive VMESS proxy installation with dynamic configuration
- Added support for Xray installation and configuration generation
- Introduced hostname normalization with city, architecture, and IP-based naming
- Updated proxy commands to include VMESS and VLESS subcommands
- Improved configuration management with NormalizeConfig method
- Enhanced logging and error handling for proxy-related operations
2025-02-28 23:58:38 +08:00
zeaslity
5c39bd7594 Enhance Docker Installation and Management Commands
- Improved Docker installation process for Ubuntu systems
- Added support for dynamic Docker version detection
- Enhanced Docker local and online installation commands
- Implemented more robust Docker removal functionality
- Updated Docker installation to use system-specific package sources
- Added better error handling and logging for Docker operations
- Refined Docker service startup and configuration checks
2025-02-28 17:45:12 +08:00
zeaslity
bffb643a56 [agent-wdd] Minor changes 2025-02-28 11:34:57 +08:00
zeaslity
b28c6462f1 Enhance Network Interface Detection and Configuration Management
- Implemented robust network interface detection with `GetInterfaces()` function
- Added validation for network interface names using regex patterns
- Updated `Network` struct to improve YAML tag formatting
- Modified `Gather()` and `SaveConfig()` methods to streamline network configuration
- Removed redundant `SaveConfig()` calls in various methods
- Added comprehensive network interface name validation logic
2025-02-28 11:19:29 +08:00
zeaslity
c10554c218 [agent-wdd] Small additions 2025-02-27 17:19:36 +08:00
zeaslity
b6cc5abc63 Refactor Disk and Memory Size Formatting with Centralized Utility Function
- Extracted common size formatting logic to a new utility function `HumanSize` in utils package
- Removed duplicate size formatting code from Disk and Memory configurations
- Updated Disk and Memory modules to use the centralized size formatting utility
- Uncommented and implemented disk usage calculation in Disk configuration
- Improved code readability and maintainability by centralizing size conversion logic
2025-02-27 15:15:55 +08:00
zeaslity
8fc55e2e28 Enhance Download Functionality with Proxy and Progress Tracking
- Implemented advanced download utility with proxy support for SOCKS5 and HTTP protocols
- Added progress tracking with human-readable file size and download percentage
- Updated go.mod and go.sum to include new dependencies for proxy and networking
- Created flexible proxy client generation for different proxy types
- Improved error handling and logging in download process
2025-02-27 15:06:40 +08:00
zeaslity
6de29630b5 123 2025-02-27 14:52:57 +08:00
zeaslity
3ad2533550 Enhance Help Command with Recursive Command Listing
- Replaced default help function with a custom implementation
- Added `printAllCommands` function to recursively list available commands
- Improved command help display with indentation and description
- Supports nested command hierarchy visualization
2025-02-27 14:42:34 +08:00
zeaslity
7a703dccc4 Add Help Command to Agent WDD CLI
- Implemented a new 'help' command in the root command
- Configured the help command to use the default usage template
- Integrated the help command into the root command's available subcommands
2025-02-27 14:26:07 +08:00
zeaslity
72bc56b5e5 Enhance Zsh and Config Commands, Update Network Configuration
- Implemented comprehensive Zsh installation command with multiple network scenarios
- Added 'config show' subcommand to display agent configuration
- Updated version command to print version information
- Modified Network configuration to clarify internet connectivity status
- Improved download utility with additional file existence checks
- Updated agent-wdd rules and documentation
2025-02-27 14:20:05 +08:00
zeaslity
16c041e3eb Add base system configuration commands for agent-wdd
- Implemented new base commands for system configuration:
  * swap: Disable system swap
  * selinux: Disable SELinux
  * firewall: Stop and disable firewalld and ufw
  * sysconfig: Modify system sysctl configuration
  * ssh: Add SSH-related subcommands (key, port, config)

- Updated Config.go to initialize ConfigCache with default values
- Added new utility functions in FileUtils.go for file content manipulation
- Extended Excutor.go with HardCodeCommandExecutor method
2025-02-27 10:57:58 +08:00
zeaslity
e8f0e0d4a9 123 2025-02-26 17:49:03 +08:00
zeaslity
c751c21871 A large batch of updates 2025-02-26 17:44:03 +08:00
zeaslity
b8170e00d4 Happily using Cursor 2025-02-26 09:25:24 +08:00
zeaslity
60a1849207 Add Cursor configuration 2025-02-25 17:01:14 +08:00
zeaslity
5a8aa53d64 Add a large amount of content 2025-02-25 16:58:47 +08:00
zeaslity
ce0395ae66 [agent-wdd] Finish the Executor and Operator parts; finish the base tools part 2025-02-14 17:17:55 +08:00
zeaslity
dabf63f10f [agent-wdd] Mostly finished organizing the Info part 2025-02-13 15:29:26 +08:00
zeaslity
e826b55240 [agent-wdd] Finish the custom log part; finish the info network part; project structure mostly complete 2025-02-11 17:27:41 +08:00
zeaslity
66dca6a080 Merge remote-tracking branch 'origin/local-ss' into local-ss 2025-02-10 15:09:24 +08:00
zeaslity
1f6dcc3ef0 Minor update 2025-02-10 15:09:19 +08:00
zeaslity
46fd5f7d97 Minor update 2025-02-10 15:09:14 +08:00
zeaslity
b8f0b14852 [agent][wdd] - Initialize the project 2025-02-10 15:07:44 +08:00
zeaslity
a0811d62e7 [agent-wdd] - Refactor the agent's bastion mode 2025-02-10 11:15:35 +08:00
zeaslity
5ca5689083 Merge remote-tracking branch 'origin/local-ss' into local-ss 2025-02-10 09:16:26 +08:00
zeaslity
5bfcb98e03 [agent][deploy] - DEMO project 2025-02-10 09:16:20 +08:00
zeaslity
0d3bb30eed [agent-go] - Simplify the Agent; strip out the Harbor, K8s, and Image related content 2025-01-22 15:09:43 +08:00
zeaslity
4edaf9f35a [agent][deploy] - Liaoning Emergency project 2025-01-10 15:21:23 +08:00
zeaslity
4135195430 [agent][deploy] - Liaoning Emergency generation 2025-01-10 14:52:16 +08:00
zeaslity
9f4631af91 [agent-deploy] - Add the Liaoning Emergency Management Department 2025-01-10 14:49:18 +08:00
zeaslity
af3e058af4 [agent][deploy] - a lot 2024-12-18 17:40:33 +08:00
zeaslity
fa0e4a0734 [agent-deploy] - Gansu project 2024-12-06 17:38:33 +08:00
zeaslity
5a3c53969c [agent-deploy] - Gansu project 2024-12-06 17:37:59 +08:00
zeaslity
8f5f85826c [agent-operator] - Routine housekeeping updates 2024-12-02 18:04:13 +08:00
zeaslity
88cb1e1bb1 [agent][deploy] - iot part 2024-11-22 16:37:17 +08:00
zeaslity
07cf7a12b7 Fix a bug in bastion 2024-11-17 11:50:16 +08:00
zeaslity
4b1712b67f Merge branch 'main' into local-ss 2024-11-17 11:49:00 +08:00
zeaslity
9b026a2ec7 Merge branch 'main' of https://gitea.107421.xyz/zeaslity/ProjectOctopus into main 2024-11-17 11:46:28 +08:00
zeaslity
724ef6424c [agent-go] - Change the file-open command 2024-11-17 11:46:19 +08:00
zeaslity
332cc1d9eb [deploy]- bug fix for devflight 2024-11-12 11:38:50 +08:00
zeaslity
bf45eeb735 Merge branch 'main' into local-ss
# Conflicts:
#	agent-common/real_project/CmiiImageListConfig.go
#	agent-operator/CmiiK8sOperator_test.go
#	agent-operator/ImageSyncOperator_test.go
2024-11-11 17:23:56 +08:00
zeaslity
f901992d92 [deploy] - Adapt to the new computer 2024-11-11 17:22:30 +08:00
zeaslity
98b0e14304 [Deploy] - Add Ziyang GA 2024-11-04 10:21:19 +08:00
zeaslity
82bdcca604 Add a Makefile-based build mode 2024-10-30 16:34:55 +08:00
zeaslity
d5cbaded65 [agent][deploy] - iot part 2024-10-22 17:17:39 +08:00
zeaslity
327d12f789 [agent][deploy] - jxejpt;; fix srs part 2024-09-27 14:34:09 +08:00
434 changed files with 295494 additions and 43263 deletions

View File

@@ -0,0 +1,60 @@
---
description: Rules providing agent-wdd-specific context
globs: *.go
---
# You are a Go programming master, fluent with the github.com/spf13/cobra framework and able to build a very modern CLI tool
@.cursorignore Please ignore the files under these directories
# The overall project architecture is as follows
1. base: basic server operations, implemented in [Base.go](mdc:agent-wdd/cmd/Base.go)
   1. docker: Docker-related operations
      1. online: install a specific Docker version over the network
      2. remove: uninstall Docker
      3. local: install Docker from a local Docker binary
   2. dockercompose: docker-compose-related operations
      1. online: install a specific docker-compose version over the network
      2. remove: uninstall docker-compose
      3. local: install docker-compose from a local file
   3. tools: install common software over the network via the host's yum/apt
   4. ssh: SSH-related operations
      1. key: install a specific ssh key
      2. port: change the sshd port to a specific port
      3. config: change the sshd configuration to a specific configuration
   5. swap: disable the host's swap
   6. selinux: disable SELinux on the host
   7. firewall: disable the host's firewall settings
   8. sysconfig: modify the host's sysconfig settings
2. zsh: zsh-related content; installs and configures zsh automatically [Zsh.go](mdc:agent-wdd/cmd/Zsh.go)
3. proxy: host proxy-related content
   1. xray: xray-related content
      1. install: install the latest version of xray
      2. local: install xray from the local machine
      3. remove: uninstall xray
   2. vmess: set up the vmess proxy mode in one step
   3. vless: set up the vless proxy mode in one step
   4. sysconfig: tune the host's proxy-related kernel parameters
4. acme: acme-related content
   1. install: install acme.sh
   2. cert: request a certificate for a specific domain
   3. list: list locally stored certificates
5. wdd
   1. host: update all hosts entries
   2. resolve: update the host's resolver configuration
   3. agent: this part is the actual octopus-agent
      1. install
      2. upgrade
      3. remove
   4. upgrade: upgrade octopus-wdd itself
6. security
   1. ssh
7. info: gather host information and save it to the config file; implemented in [Info.go](mdc:agent-wdd/cmd/Info.go)
   1. cpu: CPU information [CPU.go](mdc:agent-wdd/config/CPU.go)
   2. os: operating-system information [OS.go](mdc:agent-wdd/config/OS.go)
   3. mem: memory information [Memory.go](mdc:agent-wdd/config/Memory.go)
   4. disk: disk information [Disk.go](mdc:agent-wdd/config/Disk.go)
   5. network: network information [Network.go](mdc:agent-wdd/config/Network.go)
   6. all: all host information
8. version: print the octopus-agent build version information
9. config: the configuration file used by octopus-wdd
   1. show

.cursorignore Normal file
View File

@@ -0,0 +1,9 @@
# Add directories or file patterns to ignore during indexing (e.g. foo/ or *.csv)
agent-deploy/
message_pusher/
port_forwarding/
server/
server-go/
socks_txthinking/
source/

View File

@@ -0,0 +1,15 @@
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="CMII镜像同步-11.8-ARM" type="GoTestRunConfiguration" factoryName="Go Test">
<module name="ProjectOctopus" />
<target name="root@192.168.11.8:22" />
<working_directory value="$PROJECT_DIR$/agent-operator" />
<kind value="PACKAGE" />
<package value="wdd.io/agent-operator" />
<directory value="$PROJECT_DIR$" />
<filePath value="$PROJECT_DIR$" />
<option name="build_on_remote_target" value="true" />
<framework value="gotest" />
<pattern value="^\QTestPullFromEntityAndSyncConditionally\E$" />
<method v="2" />
</configuration>
</component>

View File

@@ -0,0 +1,15 @@
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="CmiiUpdater-35.70" type="GoTestRunConfiguration" factoryName="Go Test" singleton="false">
<module name="ProjectOctopus" />
<target name="wdd-dev-35.70" />
<working_directory value="$PROJECT_DIR$/agent-operator" />
<kind value="PACKAGE" />
<package value="wdd.io/agent-operator" />
<directory value="$PROJECT_DIR$" />
<filePath value="$PROJECT_DIR$" />
<option name="build_on_remote_target" value="true" />
<framework value="gotest" />
<pattern value="^\QTestUpdateCmiiDeploymentImageTag\E$" />
<method v="2" />
</configuration>
</component>

View File

@@ -0,0 +1,15 @@
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="Cmii镜像同步-35.70" type="GoTestRunConfiguration" factoryName="Go Test">
<module name="ProjectOctopus" />
<target name="wdd-dev-35.70" />
<working_directory value="$PROJECT_DIR$/agent-operator" />
<kind value="PACKAGE" />
<package value="wdd.io/agent-operator" />
<directory value="$PROJECT_DIR$" />
<filePath value="$PROJECT_DIR$" />
<option name="build_on_remote_target" value="true" />
<framework value="gotest" />
<pattern value="^\QTestPullFromEntityAndSyncConditionally\E$" />
<method v="2" />
</configuration>
</component>

View File

@@ -0,0 +1,15 @@
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="DCU全部CMII镜像" type="GoTestRunConfiguration" factoryName="Go Test">
<module name="ProjectOctopus" />
<target name="wdd-dev-35.70" />
<working_directory value="$PROJECT_DIR$/agent-operator" />
<kind value="PACKAGE" />
<package value="wdd.io/agent-operator" />
<directory value="$PROJECT_DIR$" />
<filePath value="$PROJECT_DIR$" />
<option name="build_on_remote_target" value="true" />
<framework value="gotest" />
<pattern value="^\QTestPullFromEntityAndSyncConditionally\E$" />
<method v="2" />
</configuration>
</component>

View File

@@ -0,0 +1,15 @@
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="DEMO更新-3570" type="GoTestRunConfiguration" factoryName="Go Test" singleton="false">
<module name="ProjectOctopus" />
<target name="wdd-dev-35.70" />
<working_directory value="$PROJECT_DIR$/agent-operator" />
<kind value="PACKAGE" />
<package value="wdd.io/agent-operator" />
<directory value="$PROJECT_DIR$" />
<filePath value="$PROJECT_DIR$" />
<option name="build_on_remote_target" value="true" />
<framework value="gotest" />
<pattern value="^\QTestUpdateCmiiImageTagFromNameTagMap\E$" />
<method v="2" />
</configuration>
</component>

View File

@@ -1,14 +1,15 @@
 <component name="ProjectRunConfigurationManager">
-<configuration default="false" name="TestUpdateCmiiDeploymentImageTag in wdd.io/agent-operator"
-    type="GoTestRunConfiguration" factoryName="Go Test" singleton="false" nameIsGenerated="true">
+<configuration default="false" name="DEMO重启-3570" type="GoTestRunConfiguration" factoryName="Go Test">
 <module name="ProjectOctopus" />
+<target name="wdd-dev-35.70" />
 <working_directory value="$PROJECT_DIR$/agent-operator" />
 <kind value="PACKAGE" />
 <package value="wdd.io/agent-operator" />
 <directory value="$PROJECT_DIR$" />
 <filePath value="$PROJECT_DIR$" />
+<option name="build_on_remote_target" value="true" />
 <framework value="gotest" />
-<pattern value="^\QTestUpdateCmiiDeploymentImageTag\E$" />
+<pattern value="^\QTestRestartCmiiDeployment\E$" />
 <method v="2" />
 </configuration>
 </component>

View File

@@ -0,0 +1,15 @@
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="Middle镜像-35.70" type="GoTestRunConfiguration" factoryName="Go Test">
<module name="ProjectOctopus" />
<target name="wdd-dev-35.70" />
<working_directory value="$PROJECT_DIR$/agent-operator" />
<kind value="PACKAGE" />
<package value="wdd.io/agent-operator" />
<directory value="$PROJECT_DIR$" />
<filePath value="$PROJECT_DIR$" />
<option name="build_on_remote_target" value="true" />
<framework value="gotest" />
<pattern value="^\QTestFetchDependencyRepos_Middle\E$" />
<method v="2" />
</configuration>
</component>

View File

@@ -0,0 +1,15 @@
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="Middle镜像-ARM-11.8" type="GoTestRunConfiguration" factoryName="Go Test">
<module name="ProjectOctopus" />
<target name="root@192.168.11.8:22" />
<working_directory value="$PROJECT_DIR$/agent-operator" />
<kind value="PACKAGE" />
<package value="wdd.io/agent-operator" />
<directory value="$PROJECT_DIR$" />
<filePath value="$PROJECT_DIR$" />
<option name="build_on_remote_target" value="true" />
<framework value="gotest" />
<pattern value="^\QTestFetchDependencyRepos_Middle\E$" />
<method v="2" />
</configuration>
</component>

View File

@@ -1,12 +1,20 @@
 <component name="ProjectRunConfigurationManager">
-<configuration default="false" name="ServerApplication" type="SpringBootApplicationConfigurationType"
-    factoryName="Spring Boot">
+<configuration default="false" name="ServerApplication" type="SpringBootApplicationConfigurationType" factoryName="Spring Boot">
 <module name="server" />
 <projectPathOnTarget name="projectPathOnTarget" value="/data/wdd/ProjectOctopus" />
 <target name="@@@LOCAL@@@" />
 <option name="SPRING_BOOT_MAIN_CLASS" value="io.wdd.ServerApplication" />
 <method v="2">
 <option name="Make" enabled="true" />
 </method>
 </configuration>
+<configuration default="false" name="ServerApplication" type="SpringBootApplicationConfigurationType" factoryName="Spring Boot">
+<module name="server" />
+<projectPathOnTarget name="projectPathOnTarget" value="/data/wdd/ProjectOctopus" />
+<target name="@@@LOCAL@@@" />
+<option name="SPRING_BOOT_MAIN_CLASS" value="io.wdd.ServerApplication" />
+<method v="2">
+<option name="Make" enabled="true" />
+</method>
+</configuration>
 </component>

View File

@@ -0,0 +1,15 @@
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="查询应用分支-3570" type="GoTestRunConfiguration" factoryName="Go Test">
<module name="ProjectOctopus" />
<target name="wdd-dev-35.70" />
<working_directory value="$PROJECT_DIR$/agent-operator" />
<kind value="PACKAGE" />
<package value="wdd.io/agent-operator" />
<directory value="$PROJECT_DIR$" />
<filePath value="$PROJECT_DIR$" />
<option name="build_on_remote_target" value="true" />
<framework value="gotest" />
<pattern value="^\QTestCmiiK8sOperator_DeploymentOneInterface\E$" />
<method v="2" />
</configuration>
</component>

View File

@@ -0,0 +1,16 @@
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="清理CMII镜像-35.70" type="GoTestRunConfiguration" factoryName="Go Test">
<module name="ProjectOctopus" />
<target name="wdd-dev-35.70" />
<working_directory value="$PROJECT_DIR$/agent-operator/image" />
<root_directory value="$PROJECT_DIR$/agent-operator" />
<kind value="PACKAGE" />
<package value="wdd.io/agent-operator/image" />
<directory value="$PROJECT_DIR$" />
<filePath value="$PROJECT_DIR$" />
<option name="build_on_remote_target" value="true" />
<framework value="gotest" />
<pattern value="^\QTestImagePruneAllCmiiImages\E$" />
<method v="2" />
</configuration>
</component>

View File

@@ -0,0 +1,15 @@
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="重启DEMO-3570" type="GoTestRunConfiguration" factoryName="Go Test">
<module name="ProjectOctopus" />
<target name="wdd-dev-35.70" />
<working_directory value="$PROJECT_DIR$/agent-operator" />
<kind value="PACKAGE" />
<package value="wdd.io/agent-operator" />
<directory value="$PROJECT_DIR$" />
<filePath value="$PROJECT_DIR$" />
<option name="build_on_remote_target" value="true" />
<framework value="gotest" />
<pattern value="^\QTestRestartCmiiDeployment\E$" />
<method v="2" />
</configuration>
</component>

View File

@@ -0,0 +1,16 @@
<component name="ProjectRunConfigurationManager">
<configuration default="false" name="院内Harbor清理-35.70" type="GoTestRunConfiguration" factoryName="Go Test">
<module name="ProjectOctopus" />
<target name="wdd-dev-35.70" />
<working_directory value="$PROJECT_DIR$/agent-operator/image" />
<root_directory value="$PROJECT_DIR$/agent-operator" />
<kind value="PACKAGE" />
<package value="wdd.io/agent-operator/image" />
<directory value="$PROJECT_DIR$" />
<filePath value="$PROJECT_DIR$" />
<option name="build_on_remote_target" value="true" />
<framework value="gotest" />
<pattern value="^\QTestHarborOperator_CmiiHarborCleanUp\E$" />
<method v="2" />
</configuration>
</component>

View File

@@ -0,0 +1,387 @@
import random
import threading
from queue import Queue
from paho.mqtt import client as mqtt_client
import numpy as np
import time
import logging
import os
import datetime
from KF_V2 import *
from utils import *
from config import *
import argparse
import json
import yaml

# Load the yaml configuration first
def load_mqtt_config():
    config_path = os.getenv('CONFIG_PATH', 'config.yaml')
    with open(config_path, 'r') as f:
        config = yaml.safe_load(f)
    return config['mqtt'], config['topics']

# Fetch the MQTT and topics configuration
mqtt_config, topics_config = load_mqtt_config()

## =======================
# MQTT broker address
# broker = '192.168.36.234'
# port = 37826
# username = "cmlc"
# password = "odD8#Ve7.B"
client_id = f'python-mqtt-{random.randint(0, 100)}'

# Create the ArgumentParser
parser = argparse.ArgumentParser(description='Parse command-line arguments')
# task_id (short: -t), type str, default "+"
parser.add_argument('-t', '--task_id', type=str, default="+", help='task ID')
# gate (short: -g), type int, default 30
parser.add_argument('-g', '--gate', type=int, default=30, help='gate threshold')
# interval (short: -i), type float, default 1.0
parser.add_argument('-i', '--interval', type=float, default=1.0, help='time interval')
# Parse the command-line arguments
args = parser.parse_args()

# Instantiate the DataFusion class
fusion_instance = DataFusion(
    gate=args.gate,
    interval=args.interval,
)

global task_id
task_id = "10087"

# Extract the base path from the yaml mqtt_topic
base_path = topics_config['mqtt_topic'].split('/')[0]  # yields "bridge"

# Topic layout used when reporting data
providerCode = "DP74b4ef9fb4aaf269"
fusionCode = "DPZYLY"
deviceType = "5ga"
fusionType = "fusion"
deviceId = "10580005"
fusionId = "554343465692430336"
sensor_id_list = ["80103"]

# Build the topic from base_path
topic = f"{base_path}/{providerCode}/device_data/{deviceType}/{deviceId}"

# Extract the base path from the yaml sensor_topic
base_topic = topics_config['sensor_topic'].split('FU_PAM')[0]  # yields "fromcheck/DP74b4ef9fb4aaf269/device_data/"

# Subscription topic, built from the yaml layout
subscribe_topic = f"{base_topic}5ga/10000000000000"  # replace FU_PAM with 5ga and + with a concrete ID

# Topic for publishing fusion results
# fusionId comes from the ID assigned when the task was dispatched
publish_topic = f"fromcheck/{fusionCode}/device_data/{fusionType}/{task_id}"

# Topic for updating the run parameters
fusion_parameters_topic = topics_config['sensor_topic']

# Generate a unique client_id
# Data pool
data_pool = Queue()
run_parameter = None
interval = args.interval

# Reference point P0 (latitude, longitude)
global reference_point
reference_point = (104.08, 30.51)  # reference point coordinates

# Initialize the data-processing pipeline
pipe = Pipeline(fusion_parameters_topic=topics_config['sensor_topic'], reference_point=reference_point)
fusion_code = "FU_PAM/" + args.task_id

# Configure logging
def setup_logging():
    # Create the logs directory if it does not exist
    if not os.path.exists('logs'):
        os.makedirs('logs')
    # The log file name includes the date
    current_time = datetime.datetime.now()
    error_log_filename = f'logs/mqtt_connection_{current_time.strftime("%Y%m%d")}_error.log'
    # Configure the root logger
    logging.basicConfig(
        level=logging.INFO,  # record everything
        format='%(asctime)s - %(levelname)s - %(message)s',
        handlers=[
            logging.StreamHandler()  # also print to the console
        ]
    )
    # Configure the error logger
    error_logger = logging.getLogger('error_logger')
    error_logger.setLevel(logging.ERROR)
    # Create the file handler
    error_handler = logging.FileHandler(error_log_filename)
    error_handler.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))
    # Attach the handler to the error logger
    error_logger.addHandler(error_handler)

def connect_mqtt() -> mqtt_client:
    def on_connect(client, userdata, flags, rc):
        if rc == 0:
            logging.info("Successfully connected to MQTT Broker")
            logging.info(f"Client ID: {client_id}")
            logging.info(f"Broker: {mqtt_config['broker']}:{mqtt_config['port']}")
            # Re-subscribe to topics
            client.subscribe(fusion_parameters_topic)
            logging.info(f"Subscribed to fusion parameters topic: {fusion_parameters_topic}")
            if hasattr(pipe, 'topics'):
                for topic in pipe.topics:
                    client.subscribe(topic)
                    logging.info(f"Subscribed to topic: {topic}")
        else:
            logging.error(f"Failed to connect, return code: {rc} ({DISCONNECT_REASONS.get(rc, 'unknown error')})")

    def on_disconnect(client, userdata, rc):
        current_time = datetime.datetime.now()
        reason = DISCONNECT_REASONS.get(rc, "unknown error")
        logging.warning(f"Disconnected from MQTT Broker at {current_time.strftime('%Y-%m-%d %H:%M:%S')}")
        logging.warning(f"Disconnect reason code: {rc} - {reason}")
        if rc != 0:
            logging.error("Unexpected disconnection. Attempting to reconnect...")
            try:
                client.reconnect()
                logging.info("Reconnection successful")
            except Exception as e:
                current_time = datetime.datetime.now()
                logging.error(f"Reconnection failed at {current_time.strftime('%Y-%m-%d %H:%M:%S')}: {str(e)}")
                logging.error(f"Exception type: {type(e).__name__}")
                logging.error("Stack trace:", exc_info=True)

    client = mqtt_client.Client(client_id, clean_session=True)
    client.username_pw_set(mqtt_config['username'], mqtt_config['password'])
    # Keepalive and retry settings
    client.keepalive = 60  # 60-second keepalive
    client.socket_timeout = 30  # 30-second socket timeout
    client.reconnect_delay_set(min_delay=1, max_delay=60)  # reconnect delay between 1 and 60 seconds
    # Last-will message
    will_topic = f"fromcheck/{fusionCode}/status/{task_id}"
    will_payload = "offline"
    client.will_set(will_topic, will_payload, qos=1, retain=True)
    # Register the callbacks
    client.on_connect = on_connect
    client.on_disconnect = on_disconnect
    try:
        client.connect(mqtt_config['broker'], mqtt_config['port'])
    except Exception as e:
        logging.error(f"Initial connection failed: {str(e)}")
        logging.error(f"Exception type: {type(e).__name__}")
        logging.error("Stack trace:", exc_info=True)
        time.sleep(5)
        return connect_mqtt()
    # Publish the online status
    client.publish(will_topic, "online", qos=1, retain=True)
    return client

def subscribe(client: mqtt_client):
    def on_message(client, userdata, msg):
        try:
            global run_parameter
            global task_id
            logging.info(f"Received message on topic: {msg.topic}")
            logging.info(f"Message payload: {msg.payload.decode()}")
            if "FU_PAM" in msg.topic:
                if args.task_id == '+' or fusion_code in msg.topic:
                    new_run_parameter = msg.payload.decode()
                    if run_parameter != new_run_parameter:
                        logging.info(f"Run parameter updated from {run_parameter} to {new_run_parameter}")
                        run_parameter = new_run_parameter
                        new_topics = pipe.extract_parms(run_parameter)
                        logging.info(f"Extracted topics: {new_topics}")
                        client.subscribe(new_topics)  # refresh the data subscriptions
                        logging.info(f"Subscribed to new topics: {new_topics}")
                        logging.info('===========new run_parameter!===============')
                        current_time = datetime.datetime.now()
                        task_id = pipe.task_id
            else:
                data_pool.put((msg.topic, msg.payload))
        except Exception as e:
            logging.error(f"Error processing message: {str(e)}")
            logging.error(f"Exception type: {type(e).__name__}")
            logging.error("Stack trace:", exc_info=True)

    subscribe_topics = [(subscribe_topic, 0), (fusion_parameters_topic, 0)]  # default QoS 0
    client.subscribe(subscribe_topics)
    client.on_message = on_message

def publish(client, message):
    global task_id
    global fusionCode
    max_retries = 3
    retry_delay = 1  # initial retry delay in seconds

    def do_publish():
        publish_topic = f"bridge/{fusionCode}/device_data/fusion/{task_id}"
        try:
            result = client.publish(publish_topic, message)
            status = result.rc
            if status == 0:
                current_time = datetime.datetime.now()
                formatted_time = current_time.strftime('%Y-%m-%d %H:%M:%S')
                with open('log.txt', 'a') as log_file:
                    log_file.write('=====================\n')
                    log_file.write(f"Send message to topic {publish_topic}\n")
                    log_file.write(f"time: {formatted_time}\n")
                    log_file.write(f"{message}\n")
                return True
            else:
                logging.error(f"Failed to send message to topic {publish_topic}, status: {status}")
                return False
        except Exception as e:
            logging.error(f"Error publishing message: {str(e)}")
            logging.error(f"Exception type: {type(e).__name__}")
            logging.error("Stack trace:", exc_info=True)
            return False

    # Retry logic
    for attempt in range(max_retries):
        if do_publish():
            return
        if attempt < max_retries - 1:  # not the last attempt
            retry_delay *= 2  # exponential backoff
            logging.warning(f"Retrying publish in {retry_delay} seconds...")
            time.sleep(retry_delay)
    logging.error(f"Failed to publish message after {max_retries} attempts")

def data_fusion(fusion_container):
    global data_pool
    data_list = []
    # Drain all data from the pool
    while not data_pool.empty():
        data_now = data_pool.get()
        processed_data = pipe.process_json_data(data_now[1])
        # Keep only meaningful data
        if processed_data and processed_data.get("objects"):  # only records that contain objects
            data_list.append(processed_data)
    if data_list:  # only write the log when there is data
        current_time = datetime.datetime.now()
        formatted_time = current_time.strftime('%Y-%m-%d %H:%M:%S')
        with open('Data_log.txt', 'a') as log_file:  # open the log file in append mode
            log_file.write('=====================\n')  # separator
            log_file.write("Get message \n")
            log_file.write(f"time: {formatted_time}\n")
            log_file.write(f"{data_list}\n")  # message content
    sensor_data = pipe.data_encoder(data_list)
    logging.info(sensor_data)
    filtered_results = fusion_container.run(sensor_data)
    processed_data = pipe.data_decoder(filtered_results)
    processed_data = json.dumps(processed_data, indent=4)
    return processed_data  # return the processed JSON string

def fusion_runner(client):
    global run_parameter
    pre_run_parameter = run_parameter
    last_run_time = time.time()
    last_health_check = time.time()
    health_check_interval = 30  # health check every 30 seconds
    fusion_container = DataFusion(args.gate, args.interval)

    def check_connection():
        if not client.is_connected():
            logging.warning("MQTT client disconnected during fusion_runner")
            try:
                client.reconnect()
                logging.info("Successfully reconnected in fusion_runner")
                return True
            except Exception as e:
                logging.error(f"Reconnection failed in fusion_runner: {str(e)}")
                logging.error(f"Exception type: {type(e).__name__}")
                logging.error("Stack trace:", exc_info=True)
                return False
        return True

    while True:
        try:
            current_time = time.time()
            # Periodic health check
            if current_time - last_health_check >= health_check_interval:
                if not check_connection():
                    time.sleep(5)  # wait 5 seconds after a failed connection, then continue
                    continue
                last_health_check = current_time
            # Data processing and publishing
            if current_time - last_run_time >= interval:
                if not check_connection():
                    continue
                last_run_time = current_time
                if run_parameter != pre_run_parameter:
                    fusion_parms = pipe.extract_fusion_parms(run_parameter)
                    fusion_container.set_parameter(fusion_parms)
                    pre_run_parameter = run_parameter
                processed_data = data_fusion(fusion_container)
                if processed_data:
                    publish(client, processed_data)
        except Exception as e:
            logging.error(f"Error in fusion_runner: {str(e)}")
            logging.error(f"Exception type: {type(e).__name__}")
            logging.error("Stack trace:", exc_info=True)
            time.sleep(1)

def run():
    # Initialize the logging system
    setup_logging()
    logging.error("Starting MQTT client application")
    while True:  # outer loop handles complete disconnects
        try:
            client = connect_mqtt()
            subscribe(client)
            logging.info("Starting fusion_runner thread")
            fusion_runner_thread = threading.Thread(target=fusion_runner, args=(client,), daemon=True)
            fusion_runner_thread.start()
            logging.info("Starting MQTT loop")
            client.loop_forever()
        except Exception as e:
            logging.critical(f"Critical error in main loop: {str(e)}")
            logging.critical(f"Exception type: {type(e).__name__}")
            logging.critical("Stack trace:", exc_info=True)
            logging.info("Restarting in 5 seconds...")
            time.sleep(5)

if __name__ == '__main__':
    run()
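
The retry loop inside publish() above is a small exponential-backoff pattern. As a minimal standalone sketch of the same pattern (the helper name retry_with_backoff and its True/False return convention are illustrative, not part of this commit):

```python
import time
import logging

def retry_with_backoff(operation, max_retries=3, initial_delay=1):
    """Call operation() until it returns True, doubling the delay between attempts."""
    delay = initial_delay
    for attempt in range(max_retries):
        if operation():
            return True
        if attempt < max_retries - 1:  # not the last attempt
            delay *= 2  # exponential backoff, as in publish() above
            logging.warning(f"Retrying in {delay} seconds...")
            time.sleep(delay)
    logging.error(f"Operation failed after {max_retries} attempts")
    return False
```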

View File

@@ -0,0 +1,145 @@
import json
import time
import random
from math import radians, degrees, sin, cos
from paho.mqtt import client as mqtt_client
import datetime
import numpy as np
from math import atan2, sqrt

# Coordinate conversion
def convert_to_cartesian(lat, lon, reference_point):
    """Convert latitude/longitude to Cartesian coordinates relative to a reference point, using an ellipsoidal Earth model."""
    # Earth ellipsoid parameters (WGS84)
    a = 6378137.0  # semi-major axis in meters
    f = 1 / 298.257223563  # flattening
    e2 = 2 * f - f ** 2  # first eccentricity squared
    # Reference point coordinates
    ref_lat, ref_lon = reference_point
    # Convert to radians
    lat_rad = radians(lat)
    lon_rad = radians(lon)
    ref_lat_rad = radians(ref_lat)
    ref_lon_rad = radians(ref_lon)
    # Radius of curvature
    N = a / sqrt(1 - e2 * sin(ref_lat_rad) ** 2)  # prime-vertical radius of curvature at the reference point
    # Planar Cartesian coordinates relative to the reference point
    delta_lon = lon_rad - ref_lon_rad
    X = (N + 0) * cos(ref_lat_rad) * delta_lon
    Y = (a * (1 - e2)) / (1 - e2 * sin(ref_lat_rad) ** 2) * (lat_rad - ref_lat_rad)
    return X, Y

# Simulated data generator
def generate_simulated_data(reference_point, radius_km, angle):
    """Generate simulated data in the shape the Pipeline expects."""
    R = 6371000  # Earth radius in meters
    # Convert the radius to radians
    radius = radius_km / R
    # Reference point coordinates
    lat0, lon0 = reference_point
    # Latitude and longitude of the new point
    new_lat = lat0 + degrees(radius * cos(radians(angle)))
    new_lon = lon0 + degrees(radius * sin(radians(angle)) / cos(radians(lat0)))
    # Build the simulated JSON payload
    mock_data = {
        "deviceId": "80103",
        "deviceType": 10,
        "objects": [
            {
                "altitude": 150.0,  # simulated altitude
                "extension": {
                    "traceId": "00000000000001876",
                    "channel": "5756500000",
                    "objectType": 30,
                    "uavId": "UAS123456",  # new field, matching the Pipeline
                    "uavModel": "DJI Mini 3 Pro",  # simulated UAV model
                    "deviceId": "80103"  # source device ID
                },
                "height": 120.0,  # height
                "latitude": new_lat,
                "longitude": new_lon,
                "X": 0.0,  # reserved, filled by the conversion function
                "Y": 0.0,  # reserved, filled by the conversion function
                "speed": 15.0,  # simulated speed
                "objectId": "AX0009",  # simulated target ID
                "time": int(time.time() * 1000),  # current timestamp in milliseconds
                "source": [["sensor1", "UAS123456"]]  # simulated source
            }
        ],
        "providerCode": "ZYLYTEST",
        "ptTime": int(time.time() * 1000)  # current timestamp in milliseconds
    }
    # Convert the coordinates
    for obj in mock_data["objects"]:
        lat, lon = obj["latitude"], obj["longitude"]
        obj["X"], obj["Y"] = convert_to_cartesian(lat, lon, reference_point)
    return json.dumps(mock_data, indent=4)

# MQTT publishing
broker = '192.168.36.234'
port = 37826
providerCode = "DP74b4ef9fb4aaf269"
deviceType = "5ga"
deviceId = "10580015"
topic = f"bridge/{providerCode}/device_data/{deviceType}/{deviceId}"
client_id = f'python-mqtt-{random.randint(0, 1000)}'
username = "cmlc"
password = "odD8#Ve7.B"
reference_point = (31.880000, 117.240000)  # longitude and latitude
radius = 1500  # radius in meters

def connect_mqtt():
    """Connect to the MQTT broker."""
    def on_connect(client, userdata, flags, rc):
        if rc == 0:
            print("Connected to MQTT Broker!")
        else:
            print(f"Failed to connect, return code {rc}")
    client = mqtt_client.Client(client_id)
    client.on_connect = on_connect
    client.username_pw_set(username, password)
    client.connect(broker, port)
    return client

def publish(client):
    """Publish the generated simulated data."""
    msg_count = 0
    angle = 0
    while True:
        time.sleep(1)
        msg = generate_simulated_data(reference_point, radius, angle)
        result = client.publish(topic, msg)
        status = result.rc
        if status == 0:
            print(f"Send `{msg_count}` to topic `{topic}`")
        else:
            print(f"Failed to send message to topic {topic}")
        msg_count += 1
        angle += 1

def run():
    client = connect_mqtt()
    client.loop_start()
    publish(client)

if __name__ == '__main__':
    run()
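
For reference, the local tangent-plane conversion that convert_to_cartesian implements can be written out as follows, with (φ₀, λ₀) the reference point in radians, a the WGS84 semi-major axis, and f the flattening:

```latex
e^{2} = 2f - f^{2},\qquad
N = \frac{a}{\sqrt{1 - e^{2}\sin^{2}\varphi_{0}}},\qquad
X = N \cos\varphi_{0}\,(\lambda - \lambda_{0}),\qquad
Y = \frac{a\,(1 - e^{2})}{1 - e^{2}\sin^{2}\varphi_{0}}\,(\varphi - \varphi_{0})
```

This matches the code: the prime-vertical radius N scales east offsets, and the a(1 − e²)/(1 − e² sin²φ₀) factor scales north offsets, both evaluated once at the reference latitude.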

View File

@@ -0,0 +1,15 @@
# Build stage
FROM python:3.12.8-slim-bookworm as builder
WORKDIR /build
COPY requirements.txt .
RUN pip install --user -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple

# Runtime stage
FROM python:3.12.8-slim-bookworm
WORKDIR /app
COPY --from=builder /root/.local/lib/python3.12/site-packages /root/.local/lib/python3.12/site-packages
COPY . .
CMD ["python", "check.py"]

View File

@@ -0,0 +1,279 @@
import datetime
from os import error
import numpy as np
from config import *

def calculate_euclidean_distances(A, B):
    # Euclidean distances between A and B
    distances = np.linalg.norm(A - B, axis=1)
    # Minimum distance and its index
    min_distance_index = np.argmin(distances)
    min_distance = distances[min_distance_index]
    return min_distance, min_distance_index

def are_lists_equal(listA, listB):
    # Sort the sub-lists of both lists
    if len(listA) == 0:
        return False
    sorted_listA = sorted(listA, key=lambda x: (x[0], x[1]))
    sorted_listB = sorted(listB, key=lambda x: (x[0], x[1]))
    # Compare the sorted lists
    return sorted_listA == sorted_listB

def sigmoid(x, a=10, b=0.1):
    # Sigmoid adjusted so that it equals 0.5 at x = shift_value
    # a and b are tuning parameters controlling the shape
    return 1 / (1 + np.exp(-a * (x - shift_value))) + b

class KalmanFilter:
    def __init__(self, measurement, com_id, measurement_variance=1, process_variance=1e-1):
        current_time = datetime.datetime.now()
        timestamp = int(current_time.timestamp() * 1000000)
        ms = measurement.tolist()
        self.m = np.array([ms[0], ms[1], ms[2], 0, 0, 0])  # 6-dimensional state vector
        self.origin = [com_id]  # origin holds the strongest response
        self.source = self.origin  # source holds all associated measurements
        self.survive = np.array(survive_initial)  # initial survival value
        self.duration = 0
        self.counter = 0
        self.id = str(timestamp % 3600000000 + np.random.randint(1000))
        self.F = [[1, 0, 0, 1, 0, 0],
                  [0, 1, 0, 0, 1, 0],
                  [0, 0, 1, 0, 0, 1],
                  [0, 0, 0, 1, 0, 0],
                  [0, 0, 0, 0, 1, 0],
                  [0, 0, 0, 0, 0, 1]]
        self.F = np.array(self.F)
        self.H = [[1, 0, 0, 0, 0, 0],
                  [0, 1, 0, 0, 0, 0],
                  [0, 0, 1, 0, 0, 0]]
        self.H = np.array(self.H)
        self.R = measurement_variance * np.eye(3)
        self.Q = process_variance * np.eye(6)
        self.Q[3, 3] = self.Q[3, 3] * 1e-3
        self.Q[4, 4] = self.Q[4, 4] * 1e-3
        self.Q[5, 5] = self.Q[5, 5] * 1e-3
        self.P = np.eye(6) * 0.1
        self.I = np.eye(6)
        self.expend = 1
        self.v = np.array([0, 0, 0])
        self.born_time = int(current_time.timestamp() * 1000)
        self.latest_update = self.born_time
        self.m_history = self.m
        self.s_history = []
        self.origin_set = [self.origin]

    def predict(self):
        F = self.F
        self.m = np.dot(F, self.m.T)  # simple one-step prediction model
        self.m = self.m.T
        self.P = np.dot(np.dot(F, self.P), F.T) + self.Q
        self.survive = self.survive * decay  # apply the decay factor
        self.origin_set = np.unique(np.array(self.origin_set), axis=0).tolist()  # deduplicate the association set

    def update(self, res, run_timestamp, gate):
        self.duration += 0.6  # each update adds 0.6 to the duration
        if len(res['distances']) == 0:
            mmd = 1e8
        else:
            min_distance_index = np.argmin(res['distances'])
            mmd = res['distances'][min_distance_index]
            measurement = res['measurements'][min_distance_index]
        # Perform the update
        if mmd < gate * self.expend:
            H = self.H
            I = self.I
            self.expend = max(self.expend * 0.8, 1)
            kalman_gain = np.dot(np.dot(self.P, H.T), np.linalg.pinv(np.dot(np.dot(H, self.P), H.T) + self.R))
            self.m += np.dot(kalman_gain, (measurement.T - np.dot(H, self.m.T)))
            self.m = self.m.T
            self.P = np.dot((I - np.dot(kalman_gain, H)), self.P)
            self.origin = [res['key_ids'][min_distance_index]]
            self.counter += 1
            self.survive = sigmoid(self.counter)  # new mapping function
            # Prevent over-confidence in the velocity components
            self.P[3, 3] = max(1e-1, self.P[3, 3])
            self.P[4, 4] = max(1e-1, self.P[4, 4])
            self.P[5, 5] = max(1e-1, self.P[5, 5])
            # Slice out the velocity
            self.v = self.m[3:6]
            self.origin_set.append(self.origin)
            self.latest_update = run_timestamp  # record the update time
        else:
            self.expend = min(self.expend * 1.2, 1.5)  # if no association, widen the gate and keep searching
            self.P[3, 3] = min(self.P[3, 3] * 1.1, 1)
            self.P[4, 4] = min(self.P[4, 4] * 1.1, 1)
            self.P[5, 5] = min(self.P[5, 5] * 1.1, 1)
            self.counter -= 1
            self.counter = max(self.counter, 0)
        self.m_history = np.vstack((self.m_history, self.m))
        self.s_history.append(self.survive)

    def one_correlation(self, data_matrix, id_list):
        # Distance between the current state and data_matrix
        min_distance, min_index = calculate_euclidean_distances(self.m[0:3], data_matrix)
        m_id = id_list[min_index]
        measurement = data_matrix[min_index, :]
        return m_id, min_distance, measurement

    def correlation(self, sensor_data):
        # Iterate over the sensors
        res = {'m_ids': [], 'distances': [], 'measurements': [], 'key_ids': []}
        for value in sensor_data:
            if len(value['id_list']) > 0:
                m_id, min_distance, measurement = self.one_correlation(value['data_matrix'], value['id_list'])
                key = value['deviceId']
                res['m_ids'].append(m_id)
                res['measurements'].append(measurement)
                res['key_ids'].append([key, m_id])
                # Give previously associated targets a higher confidence
                if [key, m_id] in self.origin_set:
                    min_distance = min_distance * 0.2
                res['distances'].append(min_distance)
        return res

# The fusion class
class DataFusion:
    def __init__(self, gate=25, interval=1, fusion_type=1,
                 measuremrnt_variance=1, process_variance=1e-1):
        """
        Initialize the DataFusion class.
        """
        # self.task_id = task_id
        self.interval = interval
        self.gate = gate
        self.targets = []
        self.fusion_type = fusion_type
        self.existence_thres = 0.01
        self.show_thres = show_thres
        self.process_variance = process_variance
        self.measuremrnt_variance = measuremrnt_variance

    def set_parameter(self, fusion_parms):
        print("GO!!!!!!!!!")
        print(fusion_parms)

    def obtain_priority(self, sensor_data):
        self.priority_dict = dict()
        for data in sensor_data:
            if data.get('priority'):
                self.priority_dict[data['deviceId']] = data['priority']
            else:
                self.priority_dict[data['deviceId']] = 1

    def out_transformer(self, target):
        out_former = {
            'objectId': target.id,
            'survive': target.survive.tolist(),
            'state': target.m.tolist(),
            'speed': np.linalg.norm(target.v).tolist() / self.interval,
            'source': target.source,
            'sigma': np.diag(target.P).tolist(),
            'X': target.m[0].tolist(),
            'Y': target.m[1].tolist(),
            'Z': target.m[2].tolist(),
            'Vx': target.v[0].tolist(),
            'Vy': target.v[1].tolist(),
            'Vz': target.v[2].tolist(),
            'born_time': str(target.born_time)
        }
        return out_former

    def run(self, sensor_data):
        current_time = datetime.datetime.now()
        run_timestamp = int(current_time.timestamp() * 1000)
        fusion_data = []
        selected_list = []
        self.obtain_priority(sensor_data)
        # Iterate over all known targets
        for target in self.targets:
            print(f"Fusion target id:{target.id} with survive: {target.survive} at :{target.m}\n")
            if target.survive < self.existence_thres:
                continue
            target.predict()
            res = target.correlation(sensor_data)
            target.update(res, run_timestamp, self.gate)
            # ==================================================
            now_id = []
            t_sum = 0
            for r, distance in enumerate(res['distances']):
                if distance < self.gate:
                    now_id.append(res['key_ids'][r])
                    selected_list.append(res['key_ids'][r])
                    D_Id = res['key_ids'][r][0]
                    t_sum += self.priority_dict[D_Id]
            target.source = now_id
            # ==================================================
            if self.fusion_type == 2 and t_sum < 2:
                target.survive = target.survive * 0.5
            out_former = self.out_transformer(target)
            if target.survive > self.show_thres:  # only output targets whose survival probability exceeds the threshold
                fusion_data.append(out_former)
        # Filter values based on the association results
        self.selected_list = selected_list
        for data in sensor_data:
            self.new_born(data)
        self.remove_duplicates()
        # ==================================================
        self.fusion_process_log(fusion_data)
        return fusion_data

    def new_born(self, value):
        for j, id in enumerate(value['id_list']):
            key = value['deviceId']
            if [key, id] not in self.selected_list:
                if self.fusion_type == 3:
                    if value['priority'] > 50:
                        self.targets.append(KalmanFilter(value['data_matrix'][j, :], [key, id], self.measuremrnt_variance, self.process_variance))
                else:
                    self.targets.append(KalmanFilter(value['data_matrix'][j, :], [key, id], self.measuremrnt_variance, self.process_variance))
                self.selected_list.append([key, id])  # add the new target to the set

    def remove_duplicates(self):
        # Collect the IDs of targets to delete
        to_delete = []
        # Iterate over all target indices
        for i in range(len(self.targets)):
            if self.targets[i].survive < self.existence_thres:
                to_delete.append(self.targets[i].id)
                continue
            if self.targets[i].survive < self.show_thres:
                continue
            for j in range(i + 1, len(self.targets)):
                # Compare whether the two source lists are identical
                if are_lists_equal(self.targets[i].source, self.targets[j].source):
                    # If they match, keep the longer-lived target and mark the other for deletion
                    if self.targets[i].duration < self.targets[j].duration:
                        to_delete.append(self.targets[i].id)
                    else:
                        to_delete.append(self.targets[j].id)
        # Deleting this way keeps target management efficient
        for item_id in sorted(to_delete, reverse=True):
            for target in self.targets:
                if target.id == item_id:
                    self.targets.remove(target)
                    break

    def fusion_process_log(self, fusion_data):
        current_time = datetime.datetime.now()
        # Format the time as year-month-day hour:minute:second
        formatted_time = current_time.strftime('%Y-%m-%d %H:%M:%S')
        with open('process_log.txt', 'a') as log_file:  # open the log file in append mode
            log_file.write('=====================\n')  # separator
            log_file.write(f"time: {formatted_time}\n")
            log_file.write(f"data:\n {fusion_data}\n")
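
The predict() and update() methods above are the standard linear Kalman filter equations; in the code's notation, with z the gated nearest measurement and (·)⁺ the Moore-Penrose pseudo-inverse (np.linalg.pinv):

```latex
\textbf{predict:}\quad m \leftarrow F\,m,\qquad P \leftarrow F P F^{\top} + Q
\qquad\qquad
\textbf{update:}\quad K = P H^{\top}\bigl(H P H^{\top} + R\bigr)^{+},\quad
m \leftarrow m + K\,(z - H m),\quad P \leftarrow (I - K H)\,P
```

On top of this, the class keeps a survival score: it decays by the factor `decay` on every predict and is re-mapped through sigmoid(counter) on every successful (gated) update.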

View File

@@ -0,0 +1,53 @@
from KF_V2 import *

# ======================
sensor_id_list = ['AUV01', 'AUV02']
sensor_data = []
sensor_data.append({
    'data_matrix': np.array([[0, 0, 0], [100, 100, 100]]),
    'id_list': ['001', '002'],
    'deviceId': 'AUV01',
    'devicePs': [0.2],  # the first value is the measurement error
    'latest_time': [0],
    'priority': 1
})
sensor_data.append({
    'data_matrix': np.array([[0, 0, 0], [100, 100, 100]]),
    'id_list': ['003', '004'],
    'deviceId': 'AUV02',
    'deivceProperties': [0.2],
    'latest_time': [0],
    'priority': 100
})
fusion_container = DataFusion(25, 1, 3)
for i in range(15):
    print(i)
    # Modify the data_matrix entries in sensor_data at the start of each iteration
    if i % 5 == 0:
        temp = {
            'data_matrix': np.array([]),
            'id_list': [],
            'deviceId': 'AUV01',
            'devicePs': [0.2],  # the first value is the measurement error
            'latest_time': [0]
        }
        c_sensor_data = []
        c_sensor_data.append(temp)
        c_sensor_data.append(temp)
        filted_results = fusion_container.run(c_sensor_data)
    else:
        sensor_data[0]['data_matrix'][0, :] += 1  # add 1 to every element of the first row
        sensor_data[0]['data_matrix'][1, :] -= 1  # subtract 1 from every element of the second row
        sensor_data[1]['data_matrix'][0, :] += 1
        sensor_data[1]['data_matrix'][1, :] -= 1
        filted_results = fusion_container.run(sensor_data)
    print("results:\n")
    for d in filted_results:
        print(d)

View File

@@ -0,0 +1,142 @@
import numpy as np
from scipy import signal

class AoAConverter:
    def __init__(self):
        self.p = [1e8, 1e8, 1e8]

    def to_cartesian(self, theta_rad, phi_rad):
        # theta_rad = np.radians(theta)
        # phi_rad = np.radians(phi)
        # Note: the inputs are in radians
        """Convert spherical coordinates to Cartesian coordinates."""
        x = np.sin(theta_rad) * np.cos(phi_rad)
        y = np.sin(theta_rad) * np.sin(phi_rad)
        z = np.cos(theta_rad)
        pc = np.array([x, y, z])
        return pc

    def calc_error(self, pc, mc):
        # Squared difference between the predicted and observed coordinates
        mc = np.expand_dims(mc, axis=1)
        diff_squared = (pc - mc) ** 2
        # Sum the squared differences to get the squared error
        error_squared = np.sum(diff_squared, axis=0)
        # Take the square root to get the error
        return np.sqrt(error_squared)

    def find_best_r(self, theta, phi, mc, r_range):
        """Search the given range for the optimal r value."""
        # Convert r_range to a NumPy array for vectorized operations
        r_values = np.array(r_range)
        # Compute the candidate Cartesian direction
        pc = self.to_cartesian(theta, phi)
        # Expand dimensions for the matrix multiplication
        r_values = np.expand_dims(r_values, axis=0)
        pc = np.expand_dims(pc, axis=1)
        # Compute the error for every r value
        # print([pc.shape, r_values.shape])
        D = np.dot(pc, r_values)
        errors = self.calc_error(D, mc)
        r_values = np.squeeze(r_values)
        # Find the minimum error and its corresponding r value
        min_error = np.min(errors)
        best_r = r_values[np.argmin(errors)]  # index back after the extra dimension added above
        return [best_r, min_error]

    def projected_measure(self, theta, phi, r, p0):
        pc = self.to_cartesian(theta, phi)
        neo_p = r * pc + p0
        return np.array(neo_p)

converter = AoAConverter()

def calculate_euclidean_distances(A, BX):
    # Euclidean distances between A and B
    B = BX['data_matrix']
    N = B.shape[0]
    r_range = np.linspace(-5, 5, 100)
    if BX.get('AOA_pos'):
        # Data from an AoA sensor: project it onto Cartesian coordinates first
        sensor_pos = BX.get('AOA_pos')
        ob_pos = A - sensor_pos
        r0 = np.linalg.norm(ob_pos)
        B_new = []
        for i in range(N):
            theta = B[i, 0]
            phi = B[i, 1]
            [best_r, min_error] = converter.find_best_r(theta, phi, ob_pos, r0 + r_range)
            print(min_error)
            B_new.append(converter.projected_measure(theta, phi, best_r, sensor_pos))
        B_new = np.array(B_new)
    else:
        B_new = B
    distances = np.linalg.norm(A - B_new, axis=1)
    # Minimum distance and its index
    min_distance_index = np.argmin(distances)
    min_distance = distances[min_distance_index]
    return [min_distance, min_distance_index, B_new]

def are_lists_equal(listA, listB):
    # Sort the sub-lists of both lists
    if len(listA) == 0:
        return False
    sorted_listA = sorted(listA, key=lambda x: (x[0], x[1]))
    sorted_listB = sorted(listB, key=lambda x: (x[0], x[1]))
    # Compare the sorted lists
    return sorted_listA == sorted_listB

def sigmoid(x, a=10, b=0.1):
    # Sigmoid adjusted so that it equals 0.5 at x = 1
    # a and b are tuning parameters controlling the shape
    return 1 / (1 + np.exp(-a * (x - 1))) + b

def calculate_correlation(A, B):
    """
    Compute the maximum correlation coefficient across all columns of two matrices.

    Parameters:
    A -- the first NumPy array
    B -- the second NumPy array
    """
    A = np.exp(-1j * A / 50)
    B = np.exp(1j * B / 50)
    corr_res = []
    for col in range(3):
        a = A[:, col]
        b = B[:, col]
        convolution = signal.convolve(a, b[::-1])
        corr_res.append(convolution)
    max_corr = np.sum(np.abs(np.array(corr_res)), 0)
    max_corr = np.max(max_corr) / 3
    return max_corr

def calculate_history_distances(target, b):
    # Forward/backward computation
    A = target.m_history
    v = target.v
    # L2 norm (Euclidean distance) between each row and the vector b
    if A.shape[0] < 10:
        return np.inf
    local_time = np.linspace(0, 10, 20)
    local_time = np.expand_dims(local_time, axis=1)
    v = np.expand_dims(v, axis=1)
    A_pre = A[-10:, 0:3]
    A_post = np.dot(local_time, v.T)
    A_all = np.vstack((A_pre, A_post))
    distances = np.linalg.norm(A_all - b, axis=1)
    # Minimum distance
    min_distance = np.min(distances)
    return min_distance
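
The AoA handling in calculate_euclidean_distances amounts to a 1-D grid search along the measured bearing. With unit direction u(θ, φ), find_best_r picks the range that best explains the track position, and projected_measure maps the bearing back to a point:

```latex
u(\theta,\varphi) = \bigl(\sin\theta\cos\varphi,\ \sin\theta\sin\varphi,\ \cos\theta\bigr),\qquad
r^{*} = \arg\min_{r\,\in\, r_{0} + [-5,\,5]} \bigl\lVert\, r\,u(\theta,\varphi) - (A - p_{s}) \,\bigr\rVert_{2},\qquad
p = r^{*}\,u(\theta,\varphi) + p_{s}
```

where p_s is the sensor position (AOA_pos), A the track position, r₀ = ‖A − p_s‖, and the search grid uses 100 samples (np.linspace(-5, 5, 100)).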

View File

@@ -0,0 +1,26 @@
#!/bin/bash
# Usage: pick a suitable directory on the host,
# upload the latest project code, place this script in your project directory,
# and adjust the parameters below.
if [[ $# -eq 0 ]]; then
    echo "tag version is null!"
    exit 233
fi

tag_version=$1
echo "start to build docker image, tag is => ${tag_version}"
docker build -t harbor.cdcyy.com.cn/cmii/cmii-uavms-pyfusion:${tag_version} .
echo ""
echo "login to docker hub"
docker login -u rad02_drone -p Drone@1234 harbor.cdcyy.com.cn
echo ""
echo "start to push image to hub!"
docker push harbor.cdcyy.com.cn/cmii/cmii-uavms-pyfusion:${tag_version}

View File

@@ -0,0 +1,374 @@
import os
import subprocess
import paho.mqtt.client as mqtt
import json
import time
import threading
import logging
from config import *
import datetime
import schedule # 需要先安装: pip install schedule
import yaml
# 读取yaml配置
def load_mqtt_config():
config_path = os.getenv('CONFIG_PATH', 'config.yaml')
with open(config_path, 'r') as f:
config = yaml.safe_load(f)
return config['mqtt'], config['topics']
# 获取MQTT和topics配置
mqtt_config, topics_config = load_mqtt_config()
# 设置日志配置
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(levelname)s - %(message)s',
handlers=[
logging.FileHandler('check.log'),
logging.StreamHandler()
]
)
# 存储运行中的任务及其配置
running_tasks = {}
task_configs = {}
# 启动 Dev_Fusion.py 的命令模板
fusion_command_template = f"nohup python Dev_Fusion.py -t {{task_id}} -g {DEV_FUSION_G} -i {DEV_FUSION_I} > /dev/null 2> error.log &"
# 日志文件夹路径
log_folder = "tasklog"
os.makedirs(log_folder, exist_ok=True)
# 创建全局锁
task_lock = threading.Lock()
def compare_configs(old_config, new_config):
"""
比较两个配置是否有实质性差异
返回 True 表示有差异,需要重启
返回 False 表示无差异,只需转发
"""
try:
# 1. 检查 devices 列表
old_devices = old_config.get('devices', [])
new_devices = new_config.get('devices', [])
if len(old_devices) != len(new_devices):
return True
# 为每个设备创建一个关键信息元组进行比较
def get_device_key(device):
return (
device.get('device_id'),
device.get('device_topic'),
device.get('device_type'),
device.get('reference_point')
)
old_device_keys = {get_device_key(d) for d in old_devices}
new_device_keys = {get_device_key(d) for d in new_devices}
# 如果设备的关键信息有变化,需要重启
if old_device_keys != new_device_keys:
return True
# 2. 检查参考点
old_ref = old_config.get('reference_point')
new_ref = new_config.get('reference_point')
if old_ref != new_ref:
return True
# 3. 其他参数(如 sampling_rate的变化不需要重启
logging.info("No critical configuration changes detected")
return False
except Exception as e:
logging.error(f"Error comparing configs: {str(e)}")
return True # 出错时视为有差异,安全起见重启实例
def stop_task(task_id):
"""停止指定的任务实例"""
try:
if task_id in running_tasks:
process = running_tasks[task_id]
# 使用 pkill 命令终止对应的 Python 进程
subprocess.run(f"pkill -f 'python.*Dev_Fusion.py.*-t {task_id}'", shell=True)
process.wait(timeout=5) # 等待进程结束
del running_tasks[task_id]
del task_configs[task_id]
logging.info(f"Task {task_id} stopped successfully")
except Exception as e:
logging.error(f"Error stopping task {task_id}: {str(e)}")
# 多线程处理函数
def handle_task(client, task_id, payload):
try:
with task_lock: # 使用锁保护共享资源
data = json.loads(payload)
sensor_topic = topics_config['sensor_topic'].replace("+", task_id)
# 记录配置更新
log_file = os.path.join(log_folder, f"received_tasklog_{task_id}.txt")
current_time = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
def log_config_update(action):
with open(log_file, "a") as f:
f.write(f"\n=== Configuration Update at {current_time} ===\n")
f.write(f"Task ID: {task_id}\n")
f.write(f"MQTT_TOPIC: {topics_config['mqtt_topic']}\n")
f.write(f"Payload: {payload}\n")
f.write(f"Action: {action}\n")
f.write("=" * 50 + "\n")
# 检查任务是否已经在运行
if task_id in running_tasks:
# 检查是否有存储的配置
if task_id in task_configs:
# 比较新旧配置
if compare_configs(task_configs[task_id], data):
logging.info(f"Configuration changed for task {task_id}, restarting...")
stop_task(task_id)
log_config_update("Configuration changed, restarting instance")
start_new_instance(client, task_id, payload, data)
else:
# 配置无变化,只转发消息
logging.info(f"No configuration change for task {task_id}, forwarding message")
log_config_update("Message forwarded (no critical changes)")
client.publish(sensor_topic, payload)
else:
# 没有存储的配置,存储新配置并转发
logging.info(f"No stored config for task {task_id}, storing first config")
task_configs[task_id] = data
log_config_update("First config stored and forwarded")
client.publish(sensor_topic, payload)
else:
# 任务不存在,启动新实例
log_config_update("New instance started")
start_new_instance(client, task_id, payload, data)
except Exception as e:
logging.error(f"Error handling task {task_id}: {str(e)}")
def start_new_instance(client, task_id, payload, config):
"""启动新的 Dev_Fusion 实例"""
try:
# 启动 Dev_Fusion.py 实例
fusion_command = fusion_command_template.format(task_id=task_id)
process = subprocess.Popen(fusion_command, shell=True)
running_tasks[task_id] = process
task_configs[task_id] = config
logging.info(f"Dev_Fusion.py started successfully for Task ID {task_id}")
# 保存日志,使用追加模式
log_file = os.path.join(log_folder, f"received_tasklog_{task_id}.txt")
current_time = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
with open(log_file, "a") as f: # 使用 "a" 模式追加内容
f.write(f"\n=== Configuration Update at {current_time} ===\n")
f.write(f"Task ID: {task_id}\n")
f.write(f"MQTT_TOPIC: {topics_config['mqtt_topic']}\n")
f.write(f"Payload: {payload}\n")
# 记录是否触发了重启
f.write("Action: New instance started\n")
f.write("=" * 50 + "\n")
# 等待实例启动
time.sleep(0.5)
# 发送配置
sensor_topic = topics_config['sensor_topic'].replace("+", task_id)
client.publish(sensor_topic, payload)
logging.info(f"Configuration sent to {sensor_topic}")
except Exception as e:
logging.error(f"Error starting new instance for task {task_id}: {str(e)}")
if task_id in running_tasks:
del running_tasks[task_id]
del task_configs[task_id]
# MQTT 回调函数
def on_connect(client, userdata, flags, rc):
if rc == 0:
logging.info("Connected to MQTT broker")
client.subscribe(topics_config['mqtt_topic']) # 使用yaml中的topic
else:
logging.error(f"Connection failed with code {rc}: {DISCONNECT_REASONS.get(rc, 'Unknown error')}")
def on_message(client, userdata, msg):
try:
payload = msg.payload.decode("utf-8")
logging.info(f"Received message on topic {msg.topic}")
data = json.loads(payload)
task_id = data.get("task_id")
if task_id:
thread = threading.Thread(target=handle_task, args=(client, task_id, payload))
thread.start()
else:
logging.warning("Received message without task_id")
except json.JSONDecodeError:
logging.error("Received message is not valid JSON")
except Exception as e:
logging.error(f"Error processing message: {str(e)}")
def check_running_instances():
"""检查系统中已经运行的 Dev_Fusion 实例"""
try:
# 使用 ps 命令查找运行中的 Dev_Fusion.py 实例
result = subprocess.run("ps aux | grep 'python.*Dev_Fusion.py' | grep -v grep",
shell=True, capture_output=True, text=True)
found_instances = []
for line in result.stdout.splitlines():
# 从命令行参数中提取 task_id
if '-t' in line:
parts = line.split()
for i, part in enumerate(parts):
if part == '-t' and i + 1 < len(parts):
task_id = parts[i + 1]
pid = parts[1] # 进程 ID 通常在第二列
found_instances.append((task_id, pid))
for task_id, pid in found_instances:
logging.info(f"Found running instance for task {task_id}, pid: {pid}")
# 读取该任务的最新配置
config = read_latest_config(task_id)
if config:
# 将已运行的实例添加到 running_tasks
running_tasks[task_id] = subprocess.Popen(['echo', ''], stdout=subprocess.PIPE)
running_tasks[task_id].pid = int(pid)
task_configs[task_id] = config
logging.info(
f"Successfully loaded config for task {task_id} from tasklog/received_tasklog_{task_id}.txt")
else:
logging.warning(f"No valid config found for task {task_id}, stopping instance...")
subprocess.run(f"pkill -f 'python.*Dev_Fusion.py.*-t {task_id}'", shell=True)
logging.info(f"Stopped instance {task_id} due to missing config")
logging.info(f"Finished checking instances. Loaded {len(running_tasks)} tasks with valid configs")
except Exception as e:
logging.error(f"Error checking running instances: {str(e)}")
def read_latest_config(task_id):
"""读取指定任务的最新配置"""
try:
log_file = os.path.join(log_folder, f"received_tasklog_{task_id}.txt")
if not os.path.exists(log_file):
logging.error(f"No log file found for task {task_id}")
return None
with open(log_file, 'r') as f:
content = f.read()
# 按配置更新块分割
updates = content.split('=== Configuration Update at')
if not updates:
return None
# 获取最后一个更新块
latest_update = updates[-1]
# 提取 Payload
payload_start = latest_update.find('Payload: ') + len('Payload: ')
payload_end = latest_update.find('\nAction:')
if payload_end == -1: # 如果没有 Action 行
payload_end = latest_update.find('\n===')
if payload_start > 0 and payload_end > payload_start:
payload = latest_update[payload_start:payload_end].strip()
return json.loads(payload)
return None
except Exception as e:
logging.error(f"Error reading latest config for task {task_id}: {str(e)}")
return None
def restart_all_instances():
    """Restart every running instance with its latest configuration."""
    logging.info("Scheduled restart: Beginning restart of all instances")
    # Copy the current task list, since running_tasks is mutated below
    tasks_to_restart = list(running_tasks.keys())
    for task_id in tasks_to_restart:
        try:
            # Load the latest configuration
            config = read_latest_config(task_id)
            if not config:
                logging.error(f"Could not find latest config for task {task_id}, skipping restart")
                continue
            # Stop the current instance
            logging.info(f"Stopping task {task_id} for scheduled restart")
            stop_task(task_id)
            # Serialize the configuration back to a JSON string
            payload = json.dumps(config)
            # Start a fresh instance
            logging.info(f"Starting new instance for task {task_id} with latest config")
            start_new_instance(mqtt_client, task_id, payload, config)
        except Exception as e:
            logging.error(f"Error restarting task {task_id}: {str(e)}")

def setup_scheduled_restart(restart_time="03:00"):
    """Schedule a daily restart of all instances at the given time."""
    schedule.every().day.at(restart_time).do(restart_all_instances)

    def run_schedule():
        while True:
            schedule.run_pending()
            time.sleep(30)  # poll the scheduler every 30 seconds

    # Run the scheduler on a daemon thread
    scheduler_thread = threading.Thread(target=run_schedule, daemon=True)
    scheduler_thread.start()
def main():
    global mqtt_client  # declared global so the scheduled restart can reuse it
    # Pick up any instances that were already running before this manager started
    check_running_instances()
    # Create the MQTT client
    mqtt_client = mqtt.Client()
    mqtt_client.on_connect = on_connect
    mqtt_client.on_message = on_message
    mqtt_client.username_pw_set(mqtt_config['username'], mqtt_config['password'])
    # Schedule the daily restart (defaults to 03:00)
    setup_scheduled_restart()
    while True:
        try:
            mqtt_client.connect(mqtt_config['broker'], mqtt_config['port'], 60)
            mqtt_client.loop_forever()
        except Exception as e:
            logging.error(f"MQTT connection error: {str(e)}")
            time.sleep(5)

if __name__ == "__main__":
    main()


@@ -0,0 +1,10 @@
mqtt:
  broker: "192.168.35.178"
  port: 31884
  username: "cmlc"
  password: "4YPk*DS%+5"
  topics:
    mqtt_topic: "bridge/DP74b4ef9fb4aaf269/device_data/FU_PAM/+"
    sensor_topic: "fromcheck/DP74b4ef9fb4aaf269/device_data/FU_PAM/+"
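
A minimal sketch of how a consumer could load this block, assuming PyYAML is installed and the file is saved as config.yaml (variable names are illustrative):

import yaml

with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

mqtt_config = cfg["mqtt"]        # broker / port / username / password
topics = mqtt_config["topics"]   # mqtt_topic and sensor_topic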


@@ -0,0 +1,67 @@
# # MQTT configuration
# broker = "192.168.35.178"  # broker address
# port = 31883               # port
# username = "cmlc"
# password = "odD8#Ve7.B"
#
# # topic used by check.py
# MQTT_TOPIC = "bridge/DP74b4ef9fb4aaf269/device_data/FU_PAM/+"
#
# # topic used by Dev_Fusion.py
# SENSOR_TOPIC = "fromcheck/DP74b4ef9fb4aaf269/device_data/FU_PAM/+"

# Arguments for the Dev_Fusion.py launch command assembled in check
DEV_FUSION_G = 40   # parameter g
DEV_FUSION_I = 0.6  # parameter i

# KF_V2 settings
shift_value = 1
survive_initial = 0.25
decay = 0.7
show_thres = 0.4
reference_point = (104.08, 30.51)

# Logging configuration: human-readable MQTT disconnect reasons
DISCONNECT_REASONS = {
    0: "normal disconnect",
    1: "protocol version mismatch",
    2: "invalid client identifier",
    3: "server unavailable",
    4: "bad username or password",
    5: "not authorized",
    6: "broker unavailable",
    7: "TLS error",
    8: "QoS not supported",
    9: "client banned",
    10: "server busy",
    11: "client banned (certificate-related)",
    128: "unspecified error",
    129: "malformed packet",
    130: "protocol error",
    131: "communication error",
    132: "server keep-alive timeout",
    133: "server internal error",
    134: "server shutting down",
    135: "server out of resources",
    136: "client network socket error",
    137: "server closing the connection",
    138: "connection refused by server",
    139: "version not supported by server",
    140: "client ID already in use",
    141: "connection rate exceeded",
    142: "maximum connection count exceeded",
    143: "keep-alive timeout",
    144: "session taken over",
    145: "connection lost",
    146: "invalid topic alias",
    147: "packet too large",
    148: "message rate too high",
    149: "quota exceeded",
    150: "administrative action",
    151: "invalid payload format",
    152: "retain not supported",
    153: "QoS not supported",
    154: "use another server",
    155: "server moved",
    156: "connection not supported",
}
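
# A minimal sketch (an assumption, not part of the original module) of how
# DISCONNECT_REASONS could be used in a paho-mqtt v1-style on_disconnect callback:
#
#   def on_disconnect(client, userdata, rc):
#       reason = DISCONNECT_REASONS.get(rc, f"unknown reason ({rc})")
#       logging.warning(f"MQTT disconnected: {reason}")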


@@ -0,0 +1,10 @@
mqtt:
  broker: "192.168.35.178"
  port: 31883
  username: "cmlc"
  password: "odD8#Ve7.B"
  topics:
    mqtt_topic: "bridge/DP74b4ef9fb4aaf269/device_data/FU_PAM/+"
    sensor_topic: "fromcheck/DP74b4ef9fb4aaf269/device_data/FU_PAM/+"


@@ -0,0 +1,19 @@
try {
    $ErrorActionPreference = "Stop"
    Write-Host "1. Uploading binary exec..." -ForegroundColor Green
    ssh root@192.168.35.71 "mkdir -p /root/wdd/ranjing-python-devfusion/"
    scp C:\Users\wdd\IdeaProjects\ProjectOctopus\agent-common\SplitProject\ranjing-python-devfusion\* root@192.168.35.71:/root/wdd/ranjing-python-devfusion/
    Write-Host "2. Exec the command ..." -ForegroundColor Blue
    Write-Host ""
    Write-Host ""
    ssh root@192.168.35.71 "cd /root/wdd/ranjing-python-devfusion/ && docker build -t ranjing/dev-fusion:v1.0 ."
    Write-Host ""
    Write-Host ""
} catch {
    Write-Host "Operation failed: $_" -ForegroundColor Red
    exit 1
}


@@ -0,0 +1,8 @@
#!/bin/bash
docker run --name devfusion \
  -d \
  --rm \
  -v /root/wdd/ranjing-python-devfusion/config-dev.yaml:/dev-fusion/config.yaml \
  harbor.cdcyy.com.cn/cmii/cmii-uavms-pyfusion:6.2.0


@@ -0,0 +1,62 @@
from math import radians, degrees, sin, cos, sqrt

def convert_to_cartesian(lat, lon, reference_point):
    """Convert latitude/longitude to local Cartesian coordinates around a
    reference point, using the WGS84 ellipsoid model."""
    # WGS84 ellipsoid parameters
    a = 6378137.0           # semi-major axis, in meters
    f = 1 / 298.257223563   # flattening
    e2 = 2 * f - f ** 2     # first eccentricity squared
    # Unpack the reference point
    ref_lat, ref_lon = reference_point
    # Convert to radians
    lat_rad = radians(lat)
    lon_rad = radians(lon)
    ref_lat_rad = radians(ref_lat)
    ref_lon_rad = radians(ref_lon)
    # Radius of curvature in the prime vertical at the reference latitude
    N = a / sqrt(1 - e2 * sin(ref_lat_rad) ** 2)
    # Planar coordinates relative to the reference point
    delta_lon = lon_rad - ref_lon_rad
    X = N * cos(ref_lat_rad) * delta_lon
    Y = (a * (1 - e2)) / (1 - e2 * sin(ref_lat_rad) ** 2) * (lat_rad - ref_lat_rad)
    return X, Y

def convert_to_geodetic(x, y, reference_point):
    """Convert local Cartesian coordinates back to latitude/longitude, using
    the WGS84 ellipsoid model."""
    # WGS84 ellipsoid parameters
    a = 6378137.0           # semi-major axis, in meters
    f = 1 / 298.257223563   # flattening
    e2 = 2 * f - f ** 2     # first eccentricity squared
    # Unpack the reference point
    ref_lat, ref_lon = reference_point
    # Convert to radians
    ref_lat_rad = radians(ref_lat)
    ref_lon_rad = radians(ref_lon)
    # Radius of curvature in the prime vertical at the reference latitude
    N = a / sqrt(1 - e2 * sin(ref_lat_rad) ** 2)
    # Recover latitude
    lat_rad = y * (1 - e2 * sin(ref_lat_rad) ** 2) / (a * (1 - e2)) + ref_lat_rad
    # Recover longitude (guard against division by zero at the poles)
    if cos(ref_lat_rad) == 0:
        lon_rad = 0
    else:
        lon_rad = x / (N * cos(ref_lat_rad)) + ref_lon_rad
    # Convert back to degrees
    lat = degrees(lat_rad)
    lon = degrees(lon_rad)
    return lat, lon
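
# Illustrative round-trip self-check (an addition, not part of the original
# module): for small offsets from the reference point, geodetic -> Cartesian
# -> geodetic should approximately recover the input. Note both functions
# unpack reference_point as (lat, lon).
if __name__ == "__main__":
    ref = (30.51, 104.08)                       # (lat, lon), illustrative values
    x, y = convert_to_cartesian(30.52, 104.09, ref)
    lat, lon = convert_to_geodetic(x, y, ref)
    print((x, y), (lat, lon))                   # expect lat/lon ~ (30.52, 104.09)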


@@ -0,0 +1,423 @@
import datetime
import json

import numpy as np

from transformation import *

class Pipeline:
    def __init__(self, fusion_parameters_topic, reference_point):
        self.fusion_parameters_topic = fusion_parameters_topic
        self.task_id = '554343465692430336'
        self.reference_point = reference_point
        # self.deviceId = deviceId
        self.sensor_id_list = ["10000000000"]
        self.fusionCode = 'DPZYLY'
        self.publish_topic = f"bridge/{self.fusionCode}/device_data/fusion/{self.task_id}"
        self.priority_dict = {"10000000000": 1}
        self.uavInfo_bucket = dict()
        self.target_bowl = dict()
        self.device_info_dict = dict()
        self.device_type_mapping = {
            "5ga": 0,
            "radar": 1,
            "spec": 2,
            "oe": 3,
            "cm": 4,
            "dec": 5,
            "ifr": 6,
            "cv": 7,
            "isrs": 8,
            "aoa": 9,
            "tdoa": 10,
            "dcd": 11,
            "direct": 100,
            "rtk": 101,
            "rid": 102,
            "fusion": 1000,
            "other": 999  # 'other' is assumed to mean an unknown device type
        }
        self.device_type_speedrank = {
            "radar": 1,
            "spec": 2,
            "oe": 3,
            "cm": 4,
            "dec": 5,
            "ifr": 6,
            "cv": 7,
            "isrs": 8,
            "aoa": 9,
            "tdoa": 10,
            "dcd": 13,
            "direct": 12,
            "5ga": 11,
            "rid": 14,
            "rtk": 15,
            "other": 0  # 'other' is assumed to mean an unknown device type
        }
    def process_json_data(self, json_data):
        """
        Parse a JSON message into a dict and add X and Y attributes.
        """
        data_dict = json.loads(json_data)
        # Access the 'ptTime' key safely
        pt_time = data_dict.get('ptTime')
        if pt_time is not None:
            print(pt_time)
        else:
            print("Key 'ptTime' not found in data_dict.")
        # Access the 'objects' key safely
        objects = data_dict.get('objects')
        if objects is None:
            print("Key 'objects' not found in data_dict.")
            return data_dict  # return the original dict unchanged when 'objects' is missing
        # 'objects' exists, so enrich each record
        for record in objects:
            # Only convert when both 'latitude' and 'longitude' are present
            if 'latitude' in record and 'longitude' in record:
                lat = record['latitude']
                lon = record['longitude']
                X, Y = convert_to_cartesian(lat, lon, self.reference_point)
                record['X'] = X
                record['Y'] = Y
            else:
                print("Record is missing 'latitude' or 'longitude' keys.")
        return data_dict
    def data_encoder(self, data_list):
        """
        Build the per-sensor data matrices and ID lists.
        """
        sensor_data = []
        for sensor_id in self.sensor_id_list:
            temp = {'data_matrix': [],
                    'id_list': [],
                    'deviceId': sensor_id,
                    'latest_time': [],
                    'priority': 1}
            for record in data_list:
                if record.get('noteData'):
                    obj = record['noteData']
                    obj['objectId'] = obj['uasId']
                    obj['deviceId'] = obj["extension"]['deviceId']
                    record['objects'] = [obj]
                if record['deviceId'] == sensor_id:
                    temp['priority'] = self.priority_dict[sensor_id]
                    if record.get('objects'):
                        for obj in record['objects']:
                            if obj['objectId'] in temp['id_list']:
                                position = temp['id_list'].index(obj['objectId'])
                                if int(record['ptTime']) > int(temp['latest_time'][position]):
                                    temp['data_matrix'][position] = [obj['X'], obj['Y'], obj['altitude']]
                            else:
                                temp['data_matrix'].append([obj['X'], obj['Y'], obj['altitude']])
                                temp['id_list'].append(obj['objectId'])
                                temp['latest_time'].append(record['ptTime'])
                            # Persist the extension fields
                            if obj.get('extension'):
                                B_id = str(record['deviceId']) + str(obj['objectId'])
                                self.uavInfo_bucket[B_id] = obj['extension']
                                # If the object carries a speed field, add it to the extension
                                if obj.get('speed'):
                                    self.uavInfo_bucket[B_id]['speed'] = obj['speed']
                                # Likewise store a height field when present
                                if obj.get('height'):
                                    self.uavInfo_bucket[B_id]['height'] = obj['height']
            # Write the assembled matrix into the per-sensor dict
            temp['data_matrix'] = np.array(temp['data_matrix'])
            sensor_data.append(temp)
        return sensor_data
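
    # Shape of each entry in the returned sensor_data list (illustrative):
    #   {'data_matrix': np.array([[X, Y, altitude], ...]), 'id_list': [objectId, ...],
    #    'deviceId': sensor_id, 'latest_time': [ptTime, ...], 'priority': 1}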
    def process_extension(self, target):
        # Default extension dict with placeholder values
        extension = {
            "objectType": 30,
            "uavSN": "Un-known",
            "uavModel": "Un-known",
            "pilotLat": 0.0,
            "pilotLon": 0.0,
            "speedX": 0.0,
            "speedY": 0.0,
            "speedZ": 0.0,
            "time": 0.0,
            "born_time": 0.0
        }
        # Reuse the historical values from target_bowl when available
        if target['objectId'] in self.target_bowl.keys():
            extension = self.target_bowl[target['objectId']]
        result_source = target['source']
        # Merge in newer data from each source
        for source in result_source:
            id = str(source[0]) + str(source[1])
            if self.uavInfo_bucket.get(id):
                for key, value in self.uavInfo_bucket[id].items():
                    # Only overwrite with values that look valid
                    if value not in ["Un-known", 0.0, None, "Unknown", "DJI Mavic"]:
                        extension[key] = value
        extension['born_time'] = int(target['born_time'])
        # Update target_bowl so the state survives across calls
        self.target_bowl[target['objectId']] = extension
        return extension
    def data_decoder(self, filtered_results):
        """
        Decode the filtered fusion results into the publishable message format.
        """
        current_time = datetime.datetime.now()
        timestamp = int(current_time.timestamp() * 1000)
        combined_objects = []
        for target in filtered_results:
            X = target['X']
            Y = target['Y']
            Z = target['Z']  # Z here is actually the altitude
            lat, lon = convert_to_geodetic(X, Y, self.reference_point)
            extension = self.process_extension(target)
            extension['time'] = int(timestamp)
            extension['born_time'] = int(int(target['born_time']) / 1000)  # value is in milliseconds
            new_origin_source = []
            for source in target['source']:
                device_id, object_id = source
                # Look up the device abbreviation in device_info_dict
                device_abbreviation = self.device_info_dict.get(device_id, {}).get('device_type', 'other')
                # Map the abbreviation to a numeric device type
                device_type = self.device_type_mapping.get(device_abbreviation, 999)
                new_origin_source.append(f"{device_type}_{device_id}_{object_id}")
            # Select the speed from the highest-priority source
            highest_priority_speed = None
            highest_priority = float('inf')
            for source in target['source']:
                device_id, object_id = source
                B_id = str(device_id) + str(object_id)
                if self.uavInfo_bucket.get(B_id):
                    device_type = self.device_info_dict.get(device_id, {}).get('device_type', 'other')
                    priority = self.device_type_speedrank.get(device_type, float('inf'))
                    if priority < highest_priority:
                        highest_priority = priority
                        # Fetch the speed and convert units where needed
                        speed = self.uavInfo_bucket[B_id].get('speed', target['speed'])
                        if device_type == "5ga":  # 5ga devices report km/h
                            speed = speed / 3.6  # convert km/h to m/s
                        highest_priority_speed = speed
            # Make sure highest_priority_speed really came from a device
            if highest_priority_speed is None:
                # No current speed found; fall back to the most recent historical speed
                for obj in reversed(combined_objects):
                    if obj["objectId"] == target['objectId']:
                        highest_priority_speed = obj.get("speed")
                        break
                if highest_priority_speed is None:
                    print(f"Warning: No speed found for target {target['objectId']}, using default target speed.")
                    new_speed = target['speed']
                else:
                    new_speed = highest_priority_speed
            else:
                new_speed = highest_priority_speed
            # Debug output to trace where the speed came from
            print(f"Selected speed for target {target['objectId']}: {new_speed} from device with priority {highest_priority}")
            # Fetch the height field
            height = None
            for source in target['source']:
                device_id, object_id = source
                B_id = str(device_id) + str(object_id)
                if self.uavInfo_bucket.get(B_id):
                    if self.uavInfo_bucket[B_id].get('height'):
                        height = self.uavInfo_bucket[B_id]['height']
                        break
            # No current height; fall back to the most recent historical height
            if height is None:
                for obj in reversed(combined_objects):
                    if obj["objectId"] == target['objectId']:
                        prev_height = obj.get("height")
                        if prev_height is not None:  # a valid historical height was found
                            height = prev_height
                        break
            # Still no height: keep the latest historical height, if any
            if height is None and combined_objects:
                for obj in reversed(combined_objects):
                    if obj["objectId"] == target['objectId']:
                        height = obj.get("height")
                        break
            temp = {
                # "msg_cnt": result['msg_cnt'],  # msg_cnt could be added to detect packet loss
                "objectId": target['objectId'],
                "X": X,
                "Y": Y,
                "height": height,  # current height, or the historical one
                "altitude": Z,
                "speed": new_speed,  # the highest-priority speed
                'latitude': lat,
                'longitude': lon,
                'sigma': target['sigma'],
                "extension": {
                    "origin_source": new_origin_source,  # the rewritten origin_source
                    # other extension fields...
                    "objectType": extension.get('objectType', 0),
                    "uavSN": extension.get("uavSN", "Un-known"),
                    "uavModel": extension.get("uavModel", "Un-known"),
                    "pilotLat": extension.get("pilotLat", 0.0),
                    "pilotLon": extension.get("pilotLon", 0.0),
                    "speedX": 0.0,  # speed components are no longer used
                    "speedY": 0.0,
                    "speedZ": 0.0,
                    "time": int(timestamp),
                    "born_time": int(int(target['born_time']) / 1000),
                },
                "time": int(timestamp),
            }
            # If objectType in the extension is already non-zero, do not overwrite it.
            if extension.get('objectType', 0) != 0 or target['objectId'] not in [obj['objectId'] for obj in
                                                                                 combined_objects]:
                temp["extension"]["objectType"] = extension.get('objectType', 0)
            else:
                # Look up the objectType of the same objectId in combined_objects, defaulting to 0
                existing_object_types = [obj["extension"].get('objectType', 0) for obj in combined_objects if
                                         obj["objectId"] == target['objectId']]
                if existing_object_types and existing_object_types[0] != 0:
                    temp["extension"]["objectType"] = existing_object_types[0]
                else:
                    temp["extension"]["objectType"] = 0
            # Validate and update uavSN and uavModel
            invalid_values = ["Un-known", 0.0, None, "Unknown", "DJI Mavic"]
            # Require uavSN to mix letters and digits, to filter out odd values introduced elsewhere
            current_sn = extension.get('uavSN', "Un-known")
            if isinstance(current_sn, str):
                has_letter = any(c.isalpha() for c in current_sn)
                has_digit = any(c.isdigit() for c in current_sn)
                if not (has_letter and has_digit):
                    # Search the history for a valid SN with the same objectId
                    for obj in reversed(combined_objects):
                        if obj["objectId"] == target['objectId']:
                            prev_sn = obj["extension"].get("uavSN", "Un-known")
                            if isinstance(prev_sn, str):
                                has_letter = any(c.isalpha() for c in prev_sn)
                                has_digit = any(c.isdigit() for c in prev_sn)
                                if has_letter and has_digit:
                                    current_sn = prev_sn
                                    break
            temp["extension"]["uavSN"] = current_sn
            temp["extension"]["uavModel"] = extension.get('uavModel', "Un-known")
            combined_objects.append(temp)
        data_processed = {
            "deviceType": 1000,
            "providerCode": "DPZYLY",
            "deviceId": self.task_id,
            "objects": combined_objects,
            "ptTime": int(timestamp)
        }
        # Only log payloads that actually carry objects
        if data_processed and data_processed.get("objects") and len(data_processed["objects"]) > 0:
            formatted_time = current_time.strftime('%Y-%m-%d %H:%M:%S')
            with open('PB_log.txt', 'a') as log_file:  # open the log file in append mode
                log_file.write('=====================\n')  # section separator
                log_file.write(f"time: {formatted_time}\n")  # timestamp
                log_file.write(f"data: {data_processed}\n")
        return data_processed
    def extract_parms(self, parm_data):
        """
        Extract the task parameters from a task payload.
        """
        id_list = []           # device IDs
        priority_dict = {}     # device priorities
        device_info_dict = {}  # detailed per-device info, used for later lookups
        data_dict = json.loads(parm_data)
        print(data_dict)
        self.task_id = data_dict['task_id']
        new_topics = [("fromcheck/DPZYLY/fly_data/rtk/#", 0)]
        devices = data_dict['devices']
        for device in devices:
            device_id = device['device_id']
            if device_id:
                id_list.append(device_id)
                new_topics.append((device["device_topic"], 0))
                # Store the device priority, defaulting to 1
                if device.get('priority'):
                    priority_dict[device_id] = device['priority']
                else:
                    priority_dict[device_id] = 1
                # Keep the device details (topic, type, sampling_rate) for one-to-many lookups
                device_info_dict[device_id] = {
                    'device_topic': device['device_topic'],
                    'device_type': device['device_type'],
                    'sampling_rate': device['properties'].get('sampling_rate', 1)  # defaults to 1 when absent
                }
        self.priority_dict = priority_dict
        self.device_info_dict = device_info_dict  # keep the device info on the instance
        self.sensor_id_list = id_list
        # Handle the reference point
        if data_dict.get('reference_point'):
            try:
                original_reference_point = data_dict['reference_point']
                if len(original_reference_point) == 2:  # must be a two-element tuple or list
                    self.reference_point = (
                        float(original_reference_point[0]),
                        float(original_reference_point[1])
                    )
                else:
                    raise ValueError("Invalid reference_point structure. Must be a tuple or list with two elements.")
            except Exception as e:
                print(f"Error processing reference_point: {e}")
                self.reference_point = None  # or fall back to some default
        return new_topics
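
    # Illustrative task payload consumed by extract_parms (hypothetical values,
    # inferred from the keys accessed above):
    #   {
    #     "task_id": "554343465692430336",
    #     "reference_point": [30.51, 104.08],
    #     "devices": [
    #       {"device_id": "10000000000",
    #        "device_topic": "bridge/DPZYLY/device_data/radar/+",
    #        "device_type": "radar",
    #        "priority": 1,
    #        "properties": {"sampling_rate": 1}}
    #     ]
    #   }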
    def extract_fusion_parms(self, parm_data):
        data_dict = json.loads(parm_data)
        # fusion_dict holds the keys to extract from data_dict, with defaults
        fusion_dict = {
            "fusion_type": 1,
            "gate": 1,
            "interval": 1,
            "show_thres": 0.4
        }
        # Overwrite each default when the key is present in data_dict
        if "fusion_type" in data_dict:
            fusion_dict["fusion_type"] = data_dict["fusion_type"]
        if "gate" in data_dict:
            fusion_dict["gate"] = data_dict["gate"]
        if "interval" in data_dict:
            fusion_dict["interval"] = data_dict["interval"]
        if "show_thres" in data_dict:
            fusion_dict["show_thres"] = data_dict["show_thres"]
        # Return the populated fusion_dict
        return fusion_dict
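
# A minimal usage sketch (hypothetical wiring; the real driver lives in
# Dev_Fusion.py, which is not shown here):
#
#   pipe = Pipeline(fusion_parameters_topic="...", reference_point=(30.51, 104.08))
#   topics = pipe.extract_parms(task_payload_json)      # MQTT topics to subscribe to
#   record = pipe.process_json_data(raw_message_json)   # adds X/Y from lat/lon
#   sensor_data = pipe.data_encoder([record])           # per-sensor matrices and id lists
#   message = pipe.data_decoder(filtered_results)       # publishable fusion payload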


@@ -0,0 +1,71 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: fe-configmap
namespace: doriscluster
labels:
app.kubernetes.io/component: fe
data:
fe.conf: |
CUR_DATE=`date +%Y%m%d-%H%M%S`
# the output dir of stderr and stdout
LOG_DIR = ${DORIS_HOME}/log
JAVA_OPTS="-Djavax.security.auth.useSubjectCredsOnly=false -Xss4m -Xmx8192m -XX:+UseMembar -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=7 -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSClassUnloadingEnabled -XX:-CMSParallelRemarkEnabled -XX:CMSInitiatingOccupancyFraction=80 -XX:SoftRefLRUPolicyMSPerMB=0 -Xloggc:$DORIS_HOME/log/fe.gc.log.$CUR_DATE"
# For jdk 9+, this JAVA_OPTS will be used as default JVM options
JAVA_OPTS_FOR_JDK_9="-Djavax.security.auth.useSubjectCredsOnly=false -Xss4m -Xmx8192m -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=7 -XX:+CMSClassUnloadingEnabled -XX:-CMSParallelRemarkEnabled -XX:CMSInitiatingOccupancyFraction=80 -XX:SoftRefLRUPolicyMSPerMB=0 -Xlog:gc*:$DORIS_HOME/log/fe.gc.log.$CUR_DATE:time"
# INFO, WARN, ERROR, FATAL
sys_log_level = INFO
# NORMAL, BRIEF, ASYNC
sys_log_mode = NORMAL
# Default dirs to put jdbc drivers,default value is ${DORIS_HOME}/jdbc_drivers
# jdbc_drivers_dir = ${DORIS_HOME}/jdbc_drivers
http_port = 8030
arrow_flight_sql_port = 9090
rpc_port = 9020
query_port = 9030
edit_log_port = 9010
enable_fqdn_mode = true
---
apiVersion: v1
kind: ConfigMap
metadata:
name: be-configmap
namespace: doriscluster
labels:
app.kubernetes.io/component: be
data:
be.conf: |
CUR_DATE=`date +%Y%m%d-%H%M%S`
PPROF_TMPDIR="$DORIS_HOME/log/"
JAVA_OPTS="-Xmx1024m -DlogPath=$DORIS_HOME/log/jni.log -Xloggc:$DORIS_HOME/log/be.gc.log.$CUR_DATE -Djavax.security.auth.useSubjectCredsOnly=false -Dsun.java.command=DorisBE -XX:-CriticalJNINatives -DJDBC_MIN_POOL=1 -DJDBC_MAX_POOL=100 -DJDBC_MAX_IDLE_TIME=300000 -DJDBC_MAX_WAIT_TIME=5000"
# For jdk 9+, this JAVA_OPTS will be used as default JVM options
JAVA_OPTS_FOR_JDK_9="-Xmx1024m -DlogPath=$DORIS_HOME/log/jni.log -Xlog:gc:$DORIS_HOME/log/be.gc.log.$CUR_DATE -Djavax.security.auth.useSubjectCredsOnly=false -Dsun.java.command=DorisBE -XX:-CriticalJNINatives -DJDBC_MIN_POOL=1 -DJDBC_MAX_POOL=100 -DJDBC_MAX_IDLE_TIME=300000 -DJDBC_MAX_WAIT_TIME=5000"
# since 1.2, the JAVA_HOME need to be set to run BE process.
# JAVA_HOME=/path/to/jdk/
# https://github.com/apache/doris/blob/master/docs/zh-CN/community/developer-guide/debug-tool.md#jemalloc-heap-profile
# https://jemalloc.net/jemalloc.3.html
JEMALLOC_CONF="percpu_arena:percpu,background_thread:true,metadata_thp:auto,muzzy_decay_ms:15000,dirty_decay_ms:15000,oversize_threshold:0,lg_tcache_max:20,prof:false,lg_prof_interval:32,lg_prof_sample:19,prof_gdump:false,prof_accum:false,prof_leak:false,prof_final:false"
JEMALLOC_PROF_PRFIX=""
# INFO, WARNING, ERROR, FATAL
sys_log_level = INFO
# ports for admin, web, heartbeat service
be_port = 9060
webserver_port = 8040
heartbeat_service_port = 9050
arrow_flight_sql_port = 39091
brpc_port = 8060


@@ -0,0 +1,101 @@
apiVersion: doris.selectdb.com/v1
kind: DorisCluster
metadata:
labels:
app.kubernetes.io/name: doriscluster
name: doriscluster-helm
namespace: doriscluster
spec:
feSpec:
replicas: 1
image: harbor.cdcyy.com.cn/cmii/doris.fe-ubuntu:2.1.6
limits:
cpu: 8
memory: 16Gi
requests:
cpu: 2
memory: 6Gi
configMapInfo:
# use kubectl create configmap fe-configmap --from-file=fe.conf
configMapName: fe-configmap
resolveKey: fe.conf
nodeSelector:
uavcloud.env: demo
persistentVolumes:
- mountPath: /opt/apache-doris/fe/doris-meta
name: doriscluster-storage0
persistentVolumeClaimSpec:
# when use specific storageclass, the storageClassName should reConfig, example as annotation.
storageClassName: nfs-prod-distribute
accessModes:
- ReadWriteOnce
resources:
# notice: if the storage size less 5G, fe will not start normal.
requests:
storage: 300Gi
- mountPath: /opt/apache-doris/fe/log
name: doriscluster-storage1
persistentVolumeClaimSpec:
# when use specific storageclass, the storageClassName should reConfig, example as annotation.
storageClassName: nfs-prod-distribute
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Gi
- mountPath: /opt/apache-doris/fe/jdbc_drivers
name: doriscluster-storage-fe-jdbc-drivers
persistentVolumeClaimSpec:
# when use specific storageclass, the storageClassName should reConfig, example as annotation.
storageClassName: nfs-prod-distribute
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Gi
beSpec:
replicas: 3
image: harbor.cdcyy.com.cn/cmii/doris.be-ubuntu:2.1.6
limits:
cpu: 8
memory: 24Gi
requests:
cpu: 2
memory: 6Gi
configMapInfo:
# use kubectl create configmap be-configmap --from-file=be.conf
configMapName: be-configmap
resolveKey: be.conf
nodeSelector:
uavcloud.env: demo
persistentVolumes:
- mountPath: /opt/apache-doris/be/storage
name: doriscluster-storage2
persistentVolumeClaimSpec:
# when use specific storageclass, the storageClassName should reConfig, example as annotation.
storageClassName: nfs-prod-distribute
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 300Gi
- mountPath: /opt/apache-doris/be/log
name: doriscluster-storage3
persistentVolumeClaimSpec:
# when use specific storageclass, the storageClassName should reConfig, example as annotation.
storageClassName: nfs-prod-distribute
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Gi
- mountPath: /opt/apache-doris/be/jdbc_drivers
name: doriscluster-storage-be-jdbc-drivers
persistentVolumeClaimSpec:
# when use specific storageclass, the storageClassName should reConfig, example as annotation.
storageClassName: nfs-prod-distribute
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Gi

File diff suppressed because it is too large.


@@ -0,0 +1,347 @@
# Source: doris-operator/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/name: serviceaccount
app.kubernetes.io/instance: controller-doris-operator-sa
app.kubernetes.io/component: rbac
app.kubernetes.io/created-by: doris-operator
app.kubernetes.io/part-of: doris-operator
app.kubernetes.io/managed-by: Helm
name: doris-operator
namespace: doriscluster
---
# Source: doris-operator/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
creationTimestamp: null
name: doris-operator
rules:
- apiGroups:
- apps
resources:
- statefulsets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- apps
resources:
- statefulsets/status
verbs:
- get
- apiGroups:
- autoscaling
resources:
- horizontalpodautoscalers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- configmaps
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- persistentvolumeclaims
verbs:
- get
- list
- watch
- update
- patch
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- serviceaccounts
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- services
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- doris.selectdb.com
resources:
- dorisclusters
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- doris.selectdb.com
resources:
- dorisclusters/finalizers
verbs:
- update
- apiGroups:
- doris.selectdb.com
resources:
- dorisclusters/status
verbs:
- get
- patch
- update
- apiGroups:
- rbac.authorization.k8s.io
resources:
- clusterrolebindings
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- rbac.authorization.k8s.io
resources:
- rolebindings
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
---
# Source: doris-operator/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/name: clusterrolebinding
app.kubernetes.io/instance: doris-operator-rolebinding
app.kubernetes.io/component: rbac
app.kubernetes.io/created-by: doris-operator
app.kubernetes.io/part-of: doris-operator
app.kubernetes.io/managed-by: Helm
name: doris-operator-rolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: doris-operator
subjects:
- kind: ServiceAccount
name: doris-operator
namespace: doriscluster
---
# Source: doris-operator/templates/leader-election-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
app.kubernetes.io/name: role
app.kubernetes.io/instance: leader-election-role
app.kubernetes.io/component: rbac
app.kubernetes.io/created-by: doris-operator
app.kubernetes.io/part-of: doris-operator
app.kubernetes.io/managed-by: Helm
name: leader-election-role
namespace: doriscluster
rules:
- apiGroups:
- ""
resources:
- configmaps
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
---
# Source: doris-operator/templates/leader-election-role-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app.kubernetes.io/name: rolebinding
app.kubernetes.io/instance: leader-election-rolebinding
app.kubernetes.io/component: rbac
app.kubernetes.io/created-by: doris-operator
app.kubernetes.io/part-of: doris-operator
app.kubernetes.io/managed-by: Helm
name: leader-election-rolebinding
namespace: doriscluster
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: leader-election-role
subjects:
- kind: ServiceAccount
name: doris-operator
namespace: doriscluster
---
# Source: doris-operator/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: doris-operator
namespace: doriscluster
labels:
control-plane: doris-operator
app.kubernetes.io/name: deployment
app.kubernetes.io/instance: doris-operator
app.kubernetes.io/component: doris-operator
app.kubernetes.io/created-by: doris-operator
app.kubernetes.io/part-of: doris-operator
spec:
selector:
matchLabels:
control-plane: doris-operator
replicas: 1
template:
metadata:
annotations:
kubectl.kubernetes.io/default-container: doris-operator
labels:
control-plane: doris-operator
spec:
# TODO(user): Uncomment the following code to configure the nodeAffinity expression
# according to the platforms which are supported by your solution.
# It is considered best practice to support multiple architectures. You can
# build your manager image using the makefile target docker-buildx.
# affinity:
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: kubernetes.io/arch
# operator: In
# values:
# - amd64
# - arm64
# - ppc64le
# - s390x
# - key: kubernetes.io/os
# operator: In
# values:
# - linux
securityContext:
runAsNonRoot: true
# TODO(user): For common cases that do not require escalating privileges
# it is recommended to ensure that all your Pods/Containers are restrictive.
# More info: https://kubernetes.io/docs/concepts/security/pod-security-standards/#restricted
# Please uncomment the following code if your project does NOT have to work on old Kubernetes
# versions < 1.19 or on vendors versions which do NOT support this field by default (i.e. Openshift < 4.11 ).
# seccompProfile:
# type: RuntimeDefault
containers:
- command:
- /dorisoperator
args:
- --leader-elect
image: harbor.cdcyy.com.cn/cmii/doris.k8s-operator:1.3.1
name: dorisoperator
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- "ALL"
livenessProbe:
httpGet:
path: /healthz
port: 8081
initialDelaySeconds: 15
periodSeconds: 20
readinessProbe:
httpGet:
path: /readyz
port: 8081
initialDelaySeconds: 5
periodSeconds: 10
# TODO(user): Configure the resources accordingly based on the project requirements.
# More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
resources:
requests:
cpu: 2
memory: 4Gi
limits:
cpu: 2
memory: 4Gi
serviceAccountName: doris-operator
terminationGracePeriodSeconds: 10


@@ -1,5 +1,341 @@
package real_project
var Cmii620ArmImageList = []string{
"harbor.cdcyy.com.cn/cmii/cmii-uav-integration:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-oauth:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-multilink:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uas-gateway:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-sync:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-admin-data:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-open-gateway:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-autowaypoint:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-emergency:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-mission:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-notice:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-threedsimulation:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-user:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-bridge:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uas-lifecycle:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-brain:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-industrial-portfolio:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-airspace:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-clusters:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-depotautoreturn:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-tower:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-iot-dispatcher:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-data-post-process:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-developer:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-device:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-gateway:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-kpi-monitor:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-mqtthandler:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-gis-server:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-advanced5g:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-suav-supervision:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-process:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-waypoint:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-grid-datasource:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-sky-converge:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-admin-gateway:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-admin-user:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-alarm:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-surveillance:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-grid-engine:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-grid-manage:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-fwdd:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-app-release:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-cloud-live:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-cms:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-logger:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-material-warehouse:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-sense-adapter:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uavms-security-center:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-emergency-rescue:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-visualization:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-uasms:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-uas:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-armypeople:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-base:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-suav-platform-supervision:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-oms:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-qinghaitourism:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-share:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uavms-platform-security-center:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-hljtt:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-open:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-splice:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-logistics:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-security:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-securityh5:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-threedsimulation:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-pilot2-to-cloud:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-qingdao:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-seniclive:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-suav-platform-supervisionh5:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-ai-brain:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-cms-portal:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-multiterminal:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-dispatchh5:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-detection:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-jiangsuwenlv:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-media:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-mws:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-uavms-platform-manager:6.2.0-szgz-arm",
"harbor.cdcyy.com.cn/cmii/cmii-live-operator:5.2.0",
"harbor.cdcyy.com.cn/cmii/srs:v5.0.195",
"harbor.cdcyy.com.cn/cmii/cmii-srs-oss-adaptor:2023-SA-skip-CHL",
}
var Cmii620ImageList = []string{
"harbor.cdcyy.com.cn/cmii/cmii-open-gateway:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-depotautoreturn:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-integration:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-grid-manage:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-process:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-threedsimulation:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-bridge:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uas-lifecycle:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-airspace:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-data-post-process:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-industrial-portfolio:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-logger:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-mission:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-sense-adapter:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-app-release:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-autowaypoint:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-cms:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-developer:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-oauth:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uas-gateway:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-iot-dispatcher:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-cloud-live:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-clusters:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-device:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-material-warehouse:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-waypoint:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-multilink:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-gis-server:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-sync:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-brain:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-mqtthandler:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-notice:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-tower:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-advanced5g:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-admin-gateway:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-alarm:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-emergency:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-user:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-kpi-monitor:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-grid-datasource:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-grid-engine:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-surveillance:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uavms-security-center:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-fwdd:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-admin-data:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-admin-user:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-suav-supervision:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-gateway:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-ai-brain:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-uas:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-suav-platform-supervisionh5:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-emergency-rescue:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-media:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-oms:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-threedsimulation:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-dispatchh5:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-detection:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-jiangsuwenlv:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-logistics:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-multiterminal:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-qinghaitourism:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-splice:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-open:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-seniclive:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uavms-platform-security-center:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-suav-platform-supervision:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-uasms:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-pilot2-to-cloud:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-base:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-hljtt:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-securityh5:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-share:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-cms-portal:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-mws:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-security:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-visualization:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-armypeople:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-qingdao:6.2.0-demo",
"harbor.cdcyy.com.cn/cmii/cmii-live-operator:5.2.0",
"harbor.cdcyy.com.cn/cmii/srs:v5.0.195",
"harbor.cdcyy.com.cn/cmii/cmii-srs-oss-adaptor:2023-SA-skip-CHL",
}
var Cmii611ImageList = []string{
"harbor.cdcyy.com.cn/cmii/cmii-uav-brain:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-depotautoreturn:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-threedsimulation:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-bridge:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-grid-engine:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-kpi-monitor:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-logger:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-notice:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-admin-data:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-app-release:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-autowaypoint:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-developer:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-device:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uas-gateway:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-alarm:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-material-warehouse:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-waypoint:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-grid-datasource:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uavms-security-center:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-admin-user:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-suav-supervision:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-cms:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-gateway:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-grid-manage:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-sense-adapter:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-airspace:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-process:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-tower:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-multilink:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uas-lifecycle:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-data-post-process:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-sync:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-mqtthandler:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-oauth:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-iot-dispatcher:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-admin-gateway:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-cloud-live:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-emergency:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-industrial-portfolio:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-mission:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-fwdd:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-gis-server:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-open-gateway:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-clusters:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-integration:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-surveillance:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-user:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-detection:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-open:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-threedsimulation:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uavms-platform-security-center:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-share:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-splice:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-visualization:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-pilot2-to-cloud:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-ai-brain:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-armypeople:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-cms-portal:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-uasms:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-base:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-emergency-rescue:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-qinghaitourism:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-security:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-hljtt:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-logistics:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-securityh5:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-jiangsuwenlv:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-mws:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-oms:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-uas:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-suav-platform-supervisionh5:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-multiterminal:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-suav-platform-supervision:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-media:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-qingdao:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-seniclive:6.1.1",
"harbor.cdcyy.com.cn/cmii/cmii-live-operator:5.2.0",
"harbor.cdcyy.com.cn/cmii/srs:v5.0.195",
"harbor.cdcyy.com.cn/cmii/cmii-srs-oss-adaptor:2023-SA-skip-CHL",
}
var Cmii600ImageList = []string{
"harbor.cdcyy.com.cn/cmii/cmii-uav-gateway:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-ruoyi:2024102802",
"harbor.cdcyy.com.cn/cmii/cmii-uav-threedsimulation:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-user:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-alarm:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-brain:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-tower:5.8.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-grid-datasource:5.2.0-24810",
"harbor.cdcyy.com.cn/cmii/cmii-uav-sense-adapter:6.0.0-snapshot-1026-db-confidence-bird",
"harbor.cdcyy.com.cn/cmii/cmii-admin-data:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-suav-supervision:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-data-post-process:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uas-gateway:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-surveillance:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-industrial-portfolio:6.0.0-31369-102401",
"harbor.cdcyy.com.cn/cmii/cmii-uav-logger:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-mqtthandler:6.0.0-31369-yunnan-092402",
"harbor.cdcyy.com.cn/cmii/cmii-uav-oauth:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-iam-gateway:5.6.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-device:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-integration:5.7.0-32108-0930",
"harbor.cdcyy.com.cn/cmii/cmii-uav-advanced5g:6.0.0-102001",
"harbor.cdcyy.com.cn/cmii/cmii-uav-mission:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-cloud-live:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-emergency:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-notice:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-depotautoreturn:5.5.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-multilink:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-app-release:4.2.0-validation",
"harbor.cdcyy.com.cn/cmii/cmii-open-gateway:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-airspace:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-admin-gateway:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-gis-server:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-grid-engine:5.1.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-grid-manage:5.1.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-kpi-monitor:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-developer:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-material-warehouse:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-process:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-clusters:5.2.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-waypoint:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-admin-user:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-cms:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uas-lifecycle:6.0.0-102901",
"harbor.cdcyy.com.cn/cmii/cmii-uav-autowaypoint:4.2.0-beta",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-armypeople:6.0.0-32443-102201",
"harbor.cdcyy.com.cn/cmii/cmii-suav-platform-supervision:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-hljtt:5.7.0-hjltt",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-mws:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-qinghaitourism:4.1.0-21377-0508",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-share:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-oms:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-splice:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-base:5.4.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-pilot2-to-cloud:6.0.0-092502",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-emergency-rescue:5.6.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-logistics:5.6.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-seniclive:5.2.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-media:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-threedsimulation:5.2.0-21392",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-cms-portal:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-jiangsuwenlv:4.1.3-jiangsu-0427",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-open:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-uas:6.0.0-102301",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-security:5.6.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-multiterminal:5.6.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-qingdao:5.7.0-29766-0815",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform:6.0.0-master600",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-ai-brain:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-visualization:5.2.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-detection:5.6.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-securityh5:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-dispatchh5:5.6.0-0708",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-uasms:6.0.0-31981",
"harbor.cdcyy.com.cn/cmii/cmii-suav-platform-supervisionh5:6.0.0",
"harbor.cdcyy.com.cn/cmii/cmii-srs-oss-adaptor:2023-SA",
"harbor.cdcyy.com.cn/cmii/cmii-live-operator:5.2.0",
"harbor.cdcyy.com.cn/cmii/ossrs/srs:v5.0.195",
}
var Cmii570ImageList = []string{
"harbor.cdcyy.com.cn/cmii/cmii-uas-gateway:5.6.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-depotautoreturn:5.5.0",


@@ -2,7 +2,7 @@ kind: Deployment
 apiVersion: apps/v1
 metadata:
   name: cmii-uav-iot-dispatcher
-  namespace: ynejpt
+  namespace: hbyd
   labels:
     app.kubernetes.io/app-version: 5.7.0
     app.kubernetes.io/managed-by: octopus-control
@@ -28,7 +28,7 @@ spec:
           claimName: nfs-backend-log-pvc
       containers:
         - name: cmii-uav-iot-dispatcher
-          image: '192.168.118.14:8033/cmii/cmii-uav-iot-dispatcher:5.7.0'
+          image: '192.168.0.10:8033/cmii/cmii-uav-iot-dispatcher:6.1.0'
          ports:
            - name: pod-port
              containerPort: 8080
@@ -37,7 +37,7 @@ spec:
            - name: ENV
              value: develop
            - name: VERSION
-             value: 5.7.0
+             value: 6.0.0
            - name: NACOS_SYSTEM_CONFIG_NAME
              value: cmii-backend-system
            - name: NACOS_SERVICE_CONFIG_NAME
@@ -53,7 +53,7 @@ spec:
            - name: SVC_NAME
              value: cmlc-uav-iot-dispatcher-svc
            - name: K8S_NAMESPACE
-             value: ynejpt
+             value: hbyd
            - name: APPLICATION_NAME
              value: cmii-uav-iot-dispatcher
            - name: CUST_JAVA_OPTS
@@ -68,11 +68,11 @@ spec:
            - name: NACOS_DISCOVERY_PORT
              value: '8080'
            - name: BIZ_CONFIG_GROUP
-             value: 5.7.0
+             value: 6.0.0
            - name: SYS_CONFIG_GROUP
-             value: 5.7.0
+             value: 6.0.0
            - name: IMAGE_VERSION
-             value: 5.7.0
+             value: 6.0.0
          resources:
            limits:
              cpu: '2'
@@ -107,7 +107,7 @@ kind: Service
 apiVersion: v1
 metadata:
   name: cmii-uav-iot-dispatcher
-  namespace: ynejpt
+  namespace: hbyd
   labels:
     app.kubernetes.io/app-version: 5.7.0
     app.kubernetes.io/managed-by: octopus-control


@@ -0,0 +1,138 @@
---
kind: ConfigMap
apiVersion: v1
metadata:
name: pyfusion-configmap
namespace: uavcloud-devflight
data:
config.yaml: |-
mqtt:
broker: "helm-emqxs"
port: 1883
username: "cmlc"
password: "4YPk*DS%+5"
topics:
mqtt_topic: "bridge/DP74b4ef9fb4aaf269/device_data/FU_PAM/+"
sensor_topic: "fromcheck/DP74b4ef9fb4aaf269/device_data/FU_PAM/+"
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: cmii-uavms-pyfusion
namespace: uavcloud-devflight
labels:
app.kubernetes.io/app-version: 6.2.0
app.kubernetes.io/managed-by: octopus-control
cmii.app: cmii-uavms-pyfusion
cmii.type: backend
octopus/control: backend-app-1.0.0
spec:
replicas: 1
selector:
matchLabels:
cmii.app: cmii-uavms-pyfusion
cmii.type: backend
template:
metadata:
creationTimestamp: null
labels:
cmii.app: cmii-uavms-pyfusion
cmii.type: backend
spec:
volumes:
- name: nfs-backend-log-volume
persistentVolumeClaim:
claimName: nfs-backend-log-pvc
- name: pyfusion-conf
configMap:
name: pyfusion-configmap
items:
- key: config.yaml
path: config.yaml
containers:
- name: cmii-uavms-pyfusion
image: 'harbor.cdcyy.com.cn/cmii/cmii-uavms-pyfusion:6.2.0'
ports:
- name: pod-port
containerPort: 8080
protocol: TCP
env:
- name: VERSION
value: 6.2.0
- name: NACOS_SYSTEM_CONFIG_NAME
value: cmii-backend-system
- name: NACOS_SERVICE_CONFIG_NAME
value: cmii-uavms-pyfusion
- name: NACOS_SERVER_ADDRESS
value: 'helm-nacos:8848'
- name: K8S_NAMESPACE
value: uavcloud-devflight
- name: APPLICATION_NAME
value: cmii-uavms-pyfusion
- name: NACOS_DISCOVERY_PORT
value: '8080'
- name: BIZ_CONFIG_GROUP
value: 6.2.0
- name: SYS_CONFIG_GROUP
value: 6.2.0
- name: IMAGE_VERSION
value: 6.2.0
resources:
limits:
cpu: '2'
memory: 3Gi
requests:
cpu: 200m
memory: 500Mi
volumeMounts:
- name: nfs-backend-log-volume
mountPath: /cmii/logs
subPath: uavcloud-devflight/cmii-uavms-pyfusion
- name: pyfusion-conf
mountPath: /app/config.yaml
subPath: config.yaml
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: Always
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
securityContext: { }
imagePullSecrets:
- name: harborsecret
affinity: { }
schedulerName: default-scheduler
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 25%
revisionHistoryLimit: 10
progressDeadlineSeconds: 600
---
kind: Service
apiVersion: v1
metadata:
name: cmii-uavms-pyfusion
namespace: uavcloud-devflight
labels:
app.kubernetes.io/app-version: 6.2.0
app.kubernetes.io/managed-by: octopus-control
cmii.app: cmii-uavms-pyfusion
cmii.type: backend
octopus/control: backend-app-1.0.0
spec:
ports:
- name: backend-tcp
protocol: TCP
port: 8080
targetPort: 8080
selector:
cmii.app: cmii-uavms-pyfusion
cmii.type: backend
type: ClusterIP
sessionAffinity: None


@@ -0,0 +1,91 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: cmii-uav-platform-renyike
namespace: uavcloud-devoperation
labels:
cmii.type: frontend
cmii.app: cmii-uav-platform-renyike
octopus.control: frontend-app-wdd
app.kubernetes.io/app-version: 5.7.0
spec:
replicas: 1
strategy:
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
cmii.type: frontend
cmii.app: cmii-uav-platform-renyike
template:
metadata:
labels:
cmii.type: frontend
cmii.app: cmii-uav-platform-renyike
spec:
imagePullSecrets:
- name: harborsecret
containers:
- name: cmii-uav-platform-renyike
image: harbor.cdcyy.com.cn/cmii/cmii-uav-platform-renyike:6.0.0-20241202
imagePullPolicy: Always
env:
- name: K8S_NAMESPACE
value: uavcloud-devoperation
- name: APPLICATION_NAME
value: cmii-uav-platform-renyike
ports:
- name: platform-9528
containerPort: 9528
protocol: TCP
resources:
limits:
cpu: "1"
memory: 1Gi
requests:
cpu: 50m
memory: 50Mi
volumeMounts:
- name: nginx-conf
mountPath: /etc/nginx/conf.d/nginx.conf
subPath: nginx.conf
- name: tenant-prefix
subPath: ingress-config.js
mountPath: /home/cmii-platform/dist/ingress-config.js
volumes:
- name: nginx-conf
configMap:
name: nginx-cm
items:
- key: nginx.conf
path: nginx.conf
- name: tenant-prefix
configMap:
name: tenant-prefix-splice
items:
- key: ingress-config.js
path: ingress-config.js
---
apiVersion: v1
kind: Service
metadata:
name: cmii-uav-platform-renyike
namespace: uavcloud-devoperation
labels:
cmii.type: frontend
cmii.app: cmii-uav-platform-renyike
octopus.control: frontend-app-wdd
app.kubernetes.io/version: 5.7.0
spec:
type: NodePort
selector:
cmii.type: frontend
cmii.app: cmii-uav-platform-renyike
ports:
- name: web-svc-port
port: 9528
protocol: TCP
targetPort: 9528
nodePort: 33333
---


@@ -0,0 +1,271 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: cmii-uav-platform-classification
namespace: uavcloud-devflight
labels:
cmii.type: frontend
cmii.app: cmii-uav-platform-classification
octopus.control: frontend-app-wdd
app.kubernetes.io/app-version: 5.7.0
spec:
replicas: 1
strategy:
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
cmii.type: frontend
cmii.app: cmii-uav-platform-classification
template:
metadata:
labels:
cmii.type: frontend
cmii.app: cmii-uav-platform-classification
spec:
imagePullSecrets:
- name: harborsecret
containers:
- name: cmii-uav-platform-classification
image: harbor.cdcyy.com.cn/cmii/cmii-uav-platform-classification:5.6.0
imagePullPolicy: Always
env:
- name: K8S_NAMESPACE
value: uavcloud-devflight
- name: APPLICATION_NAME
value: cmii-uav-platform-classification
ports:
- name: platform-9528
containerPort: 9528
protocol: TCP
resources:
limits:
cpu: "1"
memory: 1Gi
requests:
cpu: 50m
memory: 50Mi
volumeMounts:
- name: nginx-conf
mountPath: /etc/nginx/conf.d/nginx.conf
subPath: nginx.conf
- name: tenant-prefix
subPath: ingress-config.js
mountPath: /home/cmii-platform/dist/ingress-config.js
volumes:
- name: nginx-conf
configMap:
name: nginx-cm
items:
- key: nginx.conf
path: nginx.conf
- name: tenant-prefix
configMap:
name: tenant-prefix-splice
items:
- key: ingress-config.js
path: ingress-config.js
---
apiVersion: v1
kind: Service
metadata:
name: cmii-uav-platform-classification
namespace: uavcloud-devflight
labels:
cmii.type: frontend
cmii.app: cmii-uav-platform-classification
octopus.control: frontend-app-wdd
app.kubernetes.io/version: 5.7.0
spec:
type: ClusterIP
selector:
cmii.type: frontend
cmii.app: cmii-uav-platform-classification
ports:
- name: web-svc-port
port: 9528
protocol: TCP
targetPort: 9528
---
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: cmii-uav-platform-scanner
namespace: uavcloud-devflight
labels:
cmii.type: frontend
cmii.app: cmii-uav-platform-scanner
octopus.control: frontend-app-wdd
app.kubernetes.io/app-version: 5.7.0
spec:
replicas: 1
strategy:
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
cmii.type: frontend
cmii.app: cmii-uav-platform-scanner
template:
metadata:
labels:
cmii.type: frontend
cmii.app: cmii-uav-platform-scanner
spec:
imagePullSecrets:
- name: harborsecret
containers:
- name: cmii-uav-platform-scanner
image: harbor.cdcyy.com.cn/cmii/cmii-uav-platform-scanner:5.6.0
imagePullPolicy: Always
env:
- name: K8S_NAMESPACE
value: uavcloud-devflight
- name: APPLICATION_NAME
value: cmii-uav-platform-scanner
ports:
- name: platform-9528
containerPort: 9528
protocol: TCP
resources:
limits:
cpu: "1"
memory: 1Gi
requests:
cpu: 50m
memory: 50Mi
volumeMounts:
- name: nginx-conf
mountPath: /etc/nginx/conf.d/nginx.conf
subPath: nginx.conf
- name: tenant-prefix
subPath: ingress-config.js
mountPath: /home/cmii-platform/dist/ingress-config.js
volumes:
- name: nginx-conf
configMap:
name: nginx-cm
items:
- key: nginx.conf
path: nginx.conf
- name: tenant-prefix
configMap:
name: tenant-prefix-splice
items:
- key: ingress-config.js
path: ingress-config.js
---
apiVersion: v1
kind: Service
metadata:
name: cmii-uav-platform-scanner
namespace: uavcloud-devflight
labels:
cmii.type: frontend
cmii.app: cmii-uav-platform-scanner
octopus.control: frontend-app-wdd
app.kubernetes.io/version: 5.7.0
spec:
type: ClusterIP
selector:
cmii.type: frontend
cmii.app: cmii-uav-platform-scanner
ports:
- name: web-svc-port
port: 9528
protocol: TCP
targetPort: 9528
---
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: cmii-uav-platform-blockchain
namespace: uavcloud-devflight
labels:
cmii.type: frontend
cmii.app: cmii-uav-platform-blockchain
octopus.control: frontend-app-wdd
app.kubernetes.io/app-version: 5.7.0
spec:
replicas: 1
strategy:
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
cmii.type: frontend
cmii.app: cmii-uav-platform-blockchain
template:
metadata:
labels:
cmii.type: frontend
cmii.app: cmii-uav-platform-blockchain
spec:
imagePullSecrets:
- name: harborsecret
containers:
- name: cmii-uav-platform-blockchain
image: harbor.cdcyy.com.cn/cmii/cmii-uav-platform-blockchain:5.6.0
imagePullPolicy: Always
env:
- name: K8S_NAMESPACE
value: uavcloud-devflight
- name: APPLICATION_NAME
value: cmii-uav-platform-blockchain
ports:
- name: platform-9528
containerPort: 9528
protocol: TCP
resources:
limits:
cpu: "1"
memory: 1Gi
requests:
cpu: 50m
memory: 50Mi
volumeMounts:
- name: nginx-conf
mountPath: /etc/nginx/conf.d/nginx.conf
subPath: nginx.conf
- name: tenant-prefix
subPath: ingress-config.js
mountPath: /home/cmii-platform/dist/ingress-config.js
volumes:
- name: nginx-conf
configMap:
name: nginx-cm
items:
- key: nginx.conf
path: nginx.conf
- name: tenant-prefix
configMap:
name: tenant-prefix-splice
items:
- key: ingress-config.js
path: ingress-config.js
---
apiVersion: v1
kind: Service
metadata:
name: cmii-uav-platform-blockchain
namespace: uavcloud-devflight
labels:
cmii.type: frontend
cmii.app: cmii-uav-platform-blockchain
octopus.control: frontend-app-wdd
app.kubernetes.io/version: 5.7.0
spec:
type: ClusterIP
selector:
cmii.type: frontend
cmii.app: cmii-uav-platform-blockchain
ports:
- name: web-svc-port
port: 9528
protocol: TCP
targetPort: 9528
---

View File

@@ -0,0 +1,561 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: cmii-uav-blockchain
namespace: uavcloud-devflight
labels:
cmii.type: backend
cmii.app: cmii-uav-blockchain
octopus/control: backend-app-1.0.0
app.kubernetes.io/managed-by: octopus
app.kubernetes.io/app-version: 5.7.0
spec:
replicas: 0
strategy:
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
cmii.type: backend
cmii.app: cmii-uav-blockchain
template:
metadata:
labels:
cmii.type: backend
cmii.app: cmii-uav-blockchain
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: uavcloud.env
operator: In
values:
- devflight
imagePullSecrets:
- name: harborsecret
containers:
- name: cmii-uav-blockchain
image: harbor.cdcyy.com.cn/cmii/cmii-uav-blockchain:3.2.2-snapshot
imagePullPolicy: Always
env:
- name: K8S_NAMESPACE
value: uavcloud-devflight
- name: APPLICATION_NAME
value: cmii-uav-blockchain
- name: CUST_JAVA_OPTS
value: "-Xms200m -Xmx1500m -Dlog4j2.formatMsgNoLookups=true"
- name: NACOS_REGISTRY
value: "helm-nacos:8848"
- name: NACOS_DISCOVERY_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: NACOS_DISCOVERY_PORT
value: "8080"
- name: BIZ_CONFIG_GROUP
value: 5.7.0
- name: SYS_CONFIG_GROUP
value: 5.7.0
- name: IMAGE_VERSION
value: 5.7.0
- name: NACOS_USERNAME
value: "developer"
- name: NACOS_PASSWORD
value: "Deve@9128201"
ports:
- name: pod-port
containerPort: 8080
protocol: TCP
resources:
limits:
memory: 2Gi
cpu: "2"
requests:
memory: 200Mi
cpu: 200m
livenessProbe:
httpGet:
path: /cmii/health
port: pod-port
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
periodSeconds: 20
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /cmii/health
port: pod-port
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
periodSeconds: 20
successThreshold: 1
failureThreshold: 3
startupProbe:
httpGet:
path: /cmii/health
port: pod-port
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 3
periodSeconds: 20
successThreshold: 1
failureThreshold: 5
volumeMounts:
- name: nfs-backend-log-volume
mountPath: /cmii/logs
readOnly: false
subPath: uavcloud-devflight/cmii-uav-blockchain
volumes:
- name: nfs-backend-log-volume
persistentVolumeClaim:
claimName: nfs-backend-log-pvc
---
apiVersion: v1
kind: Service
metadata:
name: cmii-uav-blockchain
namespace: uavcloud-devflight
labels:
cmii.type: backend
cmii.app: cmii-uav-blockchain
octopus/control: backend-app-1.0.0
app.kubernetes.io/managed-by: octopus
app.kubernetes.io/app-version: 5.7.0
spec:
type: ClusterIP
selector:
cmii.type: backend
cmii.app: cmii-uav-blockchain
ports:
- name: backend-tcp
port: 8080
protocol: TCP
targetPort: 8080
---
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: cmii-uav-container-scanner
namespace: uavcloud-devflight
labels:
cmii.type: backend
cmii.app: cmii-uav-container-scanner
octopus/control: backend-app-1.0.0
app.kubernetes.io/managed-by: octopus
app.kubernetes.io/app-version: 5.7.0
spec:
replicas: 0
strategy:
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
cmii.type: backend
cmii.app: cmii-uav-container-scanner
template:
metadata:
labels:
cmii.type: backend
cmii.app: cmii-uav-container-scanner
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: uavcloud.env
operator: In
values:
- devflight
imagePullSecrets:
- name: harborsecret
containers:
- name: cmii-uav-container-scanner
image: harbor.cdcyy.com.cn/cmii/cmii-uav-container-scanner:5.6.0
imagePullPolicy: Always
env:
- name: K8S_NAMESPACE
value: uavcloud-devflight
- name: APPLICATION_NAME
value: cmii-uav-container-scanner
- name: CUST_JAVA_OPTS
value: "-Xms200m -Xmx1500m -Dlog4j2.formatMsgNoLookups=true"
- name: NACOS_REGISTRY
value: "helm-nacos:8848"
- name: NACOS_DISCOVERY_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: NACOS_DISCOVERY_PORT
value: "8080"
- name: BIZ_CONFIG_GROUP
value: 5.7.0
- name: SYS_CONFIG_GROUP
value: 5.7.0
- name: IMAGE_VERSION
value: 5.7.0
- name: NACOS_USERNAME
value: "developer"
- name: NACOS_PASSWORD
value: "Deve@9128201"
ports:
- name: pod-port
containerPort: 8080
protocol: TCP
resources:
limits:
memory: 2Gi
cpu: "2"
requests:
memory: 200Mi
cpu: 200m
livenessProbe:
httpGet:
path: /cmii/health
port: pod-port
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
periodSeconds: 20
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /cmii/health
port: pod-port
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
periodSeconds: 20
successThreshold: 1
failureThreshold: 3
startupProbe:
httpGet:
path: /cmii/health
port: pod-port
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 3
periodSeconds: 20
successThreshold: 1
failureThreshold: 5
volumeMounts:
- name: nfs-backend-log-volume
mountPath: /cmii/logs
readOnly: false
subPath: uavcloud-devflight/cmii-uav-container-scanner
volumes:
- name: nfs-backend-log-volume
persistentVolumeClaim:
claimName: nfs-backend-log-pvc
---
apiVersion: v1
kind: Service
metadata:
name: cmii-uav-container-scanner
namespace: uavcloud-devflight
labels:
cmii.type: backend
cmii.app: cmii-uav-container-scanner
octopus/control: backend-app-1.0.0
app.kubernetes.io/managed-by: octopus
app.kubernetes.io/app-version: 5.7.0
spec:
type: ClusterIP
selector:
cmii.type: backend
cmii.app: cmii-uav-container-scanner
ports:
- name: backend-tcp
port: 8080
protocol: TCP
targetPort: 8080
---
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: cmii-uav-container-scanner-go
namespace: uavcloud-devflight
labels:
cmii.type: backend
cmii.app: cmii-uav-container-scanner-go
octopus/control: backend-app-1.0.0
app.kubernetes.io/managed-by: octopus
app.kubernetes.io/app-version: 5.7.0
spec:
replicas: 0
strategy:
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
cmii.type: backend
cmii.app: cmii-uav-container-scanner-go
template:
metadata:
labels:
cmii.type: backend
cmii.app: cmii-uav-container-scanner-go
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: uavcloud.env
operator: In
values:
- devflight
imagePullSecrets:
- name: harborsecret
containers:
- name: cmii-uav-container-scanner-go
image: harbor.cdcyy.com.cn/cmii/cmii-uav-container-scanner-go:5.6.0
imagePullPolicy: Always
env:
- name: K8S_NAMESPACE
value: uavcloud-devflight
- name: APPLICATION_NAME
value: cmii-uav-container-scanner-go
- name: CUST_JAVA_OPTS
value: "-Xms200m -Xmx1500m -Dlog4j2.formatMsgNoLookups=true"
- name: NACOS_REGISTRY
value: "helm-nacos:8848"
- name: NACOS_DISCOVERY_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: NACOS_DISCOVERY_PORT
value: "8080"
- name: BIZ_CONFIG_GROUP
value: 5.7.0
- name: SYS_CONFIG_GROUP
value: 5.7.0
- name: IMAGE_VERSION
value: 5.7.0
- name: NACOS_USERNAME
value: "developer"
- name: NACOS_PASSWORD
value: "Deve@9128201"
ports:
- name: pod-port
containerPort: 8080
protocol: TCP
resources:
limits:
memory: 2Gi
cpu: "2"
requests:
memory: 200Mi
cpu: 200m
livenessProbe:
httpGet:
path: /cmii/health
port: pod-port
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
periodSeconds: 20
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /cmii/health
port: pod-port
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
periodSeconds: 20
successThreshold: 1
failureThreshold: 3
startupProbe:
httpGet:
path: /cmii/health
port: pod-port
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 3
periodSeconds: 20
successThreshold: 1
failureThreshold: 5
volumeMounts:
- name: nfs-backend-log-volume
mountPath: /cmii/logs
readOnly: false
subPath: uavcloud-devflight/cmii-uav-container-scanner-go
volumes:
- name: nfs-backend-log-volume
persistentVolumeClaim:
claimName: nfs-backend-log-pvc
---
apiVersion: v1
kind: Service
metadata:
name: cmii-uav-container-scanner-go
namespace: uavcloud-devflight
labels:
cmii.type: backend
cmii.app: cmii-uav-container-scanner-go
octopus/control: backend-app-1.0.0
app.kubernetes.io/managed-by: octopus
app.kubernetes.io/app-version: 5.7.0
spec:
type: ClusterIP
selector:
cmii.type: backend
cmii.app: cmii-uav-container-scanner-go
ports:
- name: backend-tcp
port: 8080
protocol: TCP
targetPort: 8080
---
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: cmii-uav-data-classification
namespace: uavcloud-devflight
labels:
cmii.type: backend
cmii.app: cmii-uav-data-classification
octopus/control: backend-app-1.0.0
app.kubernetes.io/managed-by: octopus
app.kubernetes.io/app-version: 5.7.0
spec:
replicas: 0
strategy:
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
cmii.type: backend
cmii.app: cmii-uav-data-classification
template:
metadata:
labels:
cmii.type: backend
cmii.app: cmii-uav-data-classification
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: uavcloud.env
operator: In
values:
- devflight
imagePullSecrets:
- name: harborsecret
containers:
- name: cmii-uav-data-classification
image: harbor.cdcyy.com.cn/cmii/cmii-uav-data-classification:5.6.0
imagePullPolicy: Always
env:
- name: K8S_NAMESPACE
value: uavcloud-devflight
- name: APPLICATION_NAME
value: cmii-uav-data-classification
- name: CUST_JAVA_OPTS
value: "-Xms200m -Xmx1500m -Dlog4j2.formatMsgNoLookups=true"
- name: NACOS_REGISTRY
value: "helm-nacos:8848"
- name: NACOS_DISCOVERY_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: NACOS_DISCOVERY_PORT
value: "8080"
- name: BIZ_CONFIG_GROUP
value: 5.7.0
- name: SYS_CONFIG_GROUP
value: 5.7.0
- name: IMAGE_VERSION
value: 5.7.0
- name: NACOS_USERNAME
value: "developer"
- name: NACOS_PASSWORD
value: "Deve@9128201"
ports:
- name: pod-port
containerPort: 8080
protocol: TCP
resources:
limits:
memory: 2Gi
cpu: "2"
requests:
memory: 200Mi
cpu: 200m
livenessProbe:
httpGet:
path: /cmii/health
port: pod-port
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
periodSeconds: 20
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /cmii/health
port: pod-port
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
periodSeconds: 20
successThreshold: 1
failureThreshold: 3
startupProbe:
httpGet:
path: /cmii/health
port: pod-port
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 3
periodSeconds: 20
successThreshold: 1
failureThreshold: 5
volumeMounts:
- name: nfs-backend-log-volume
mountPath: /cmii/logs
readOnly: false
subPath: uavcloud-devflight/cmii-uav-data-classification
volumes:
- name: nfs-backend-log-volume
persistentVolumeClaim:
claimName: nfs-backend-log-pvc
---
apiVersion: v1
kind: Service
metadata:
name: cmii-uav-data-classification
namespace: uavcloud-devflight
labels:
cmii.type: backend
cmii.app: cmii-uav-data-classification
octopus/control: backend-app-1.0.0
app.kubernetes.io/managed-by: octopus
app.kubernetes.io/app-version: 5.7.0
spec:
type: ClusterIP
selector:
cmii.type: backend
cmii.app: cmii-uav-data-classification
ports:
- name: backend-tcp
port: 8080
protocol: TCP
targetPort: 8080
---
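All four backend Deployments in this file ship with replicas: 0 and pin their pods to nodes labeled uavcloud.env=devflight, so nothing actually runs until someone scales them up. A minimal sketch of bringing one online, assuming kubectl access to the uavcloud-devflight namespace:

# Scale one of the replicas: 0 backends up and wait for its probes to pass.
kubectl -n uavcloud-devflight scale deployment cmii-uav-blockchain --replicas=1
kubectl -n uavcloud-devflight rollout status deployment/cmii-uav-blockchain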

View File

@@ -0,0 +1,110 @@
MobaXterm 12.4
(SSH client, X-server and networking tools)

SSH session to root@192.168.35.71
SSH compression : ?
SSH-browser     : ?
X11-forwarding  : ? (disabled or not supported by server)
DISPLAY         : 10.250.0.14:0.0

For more info, ctrl+click on help or visit our website.

Authorized users only. All activity may be monitored and reported!
Last login: Tue Dec 17 14:26:14 2024 from 192.168.103.36

root@onetools-2 ~ » ls
1.py  all-gzip-image-list.txt  go  logs  node-p.tar.gz  port_linux_amd64  wdd
2.sh  Clash.Verge_2.0.1_x64-setup.exe  image  nethogs  nohup.out  Postman-win64.exe
all-cmii-image-list.txt  cmii_third  k0s  node-p-0.8.7.tar.gz  octopus_image  test.sh
root@onetools-2 ~ » cd image
root@onetools-2 ~/image » ls
2.sh
'cmii-live-operator=v5.7.0=2024-12-11=206.tar.gz'
'cmii-suav-supervision=6.1.1=2024-12-10=418.tar.gz'
'cmii-uav-advanced5g=6.1.1=2024-12-11=915.tar.gz'
'cmii-uav-bridge=5.7.0-xzga-121001=2024-12-10=210.tar.gz'
'cmii-uav-bridge=5.7.0-xzga-1210=2024-12-10=522.tar.gz'
'cmii-uav-cloud-live=5.7.0-szga=2024-12-13=816.tar.gz'
'cmii-uav-device=5.6.0-szga-1212-arm=2024-12-12=354.tar.gz'
'cmii-uav-device=5.6.0-szga-1216-arm=2024-12-16=178.tar.gz'
'cmii-uav-device=6.1.0-szga-1210=2024-12-10=908.tar.gz'
'cmii-uav-integration=6.1.0-xzga-1211=2024-12-11=277.tar.gz'
'cmii-uav-integration=6.1.0-xzga-1212=2024-12-12=223.tar.gz'
'cmii-uav-integration=6.1.0-xzga-1212=2024-12-12=770.tar.gz'
'cmii-uav-mission=5.4.0-zyga-1216=2024-12-16=288.tar.gz'
'cmii-uav-mission=6.1.1-xzga=2024-12-11=578.tar.gz'
'cmii-uav-mqtthandler=6.1.0-1217-shbj-arm=2024-12-17=514.tar.gz'
'cmii-uav-oauth=6.0.0-33992-120601=2024-12-13=980.tar.gz'
'cmii-uav-platform=5.4.0-27971-zyga-1217=2024-12-17=845.tar.gz'
'cmii-uav-platform=5.7.0-32124-121101-arm=2024-12-11=423.tar.gz'
'cmii-uav-platform=5.7.0-32124-121201-arm=2024-12-12=915.tar.gz'
'cmii-uav-platform=5.7.0-32124-121301-arm=2024-12-13=302.tar.gz'
'cmii-uav-platform=5.7.0-32124-121601-arm=2024-12-16=674.tar.gz'
'cmii-uav-platform=5.7.0-32124-121602-arm=2024-12-16=611.tar.gz'
'cmii-uav-platform=5.7.0-32124-121701-arm=2024-12-17=867.tar.gz'
'cmii-uav-platform=6.0.0-33992-121301=2024-12-13=435.tar.gz'
'cmii-uav-platform=6.1.0-32124-shbj-1217-arm=2024-12-17=736.tar.gz'
'cmii-uav-platform=6.1.0-33579-121110=2024-12-11=944.tar.gz'
'cmii-uav-platform=6.1.0-33579-1211=2024-12-11=358.tar.gz'
'cmii-uav-platform=6.1.1-33579=2024-12-10=863.tar.gz'
'cmii-uav-platform=6.1.1-33579=2024-12-10=866.tar.gz'
'cmii-uav-platform-share=6.1.1=2024-12-10=429.tar.gz'
'cmii-uav-surveillance=5.6.0-szga-1211-arm=2024-12-11=866.tar.gz'
'cmii-uav-surveillance=5.6.0-szga-1217-arm=2024-12-17=394.tar.gz'
'cmii-uav-surveillance=5.7.0-xzga-121101=2024-12-11=439.tar.gz'
'cmii-uav-surveillance=5.7.0-xzga-121101=2024-12-11=939.tar.gz'
'cmii-zlm-oss-adaptor=v2.7.3=2024-12-11=473.tar.gz'
'cmii-zlm-oss-adaptor=v2.7.3=2024-12-11=485.tar.gz'
'cmlc-live=v2.7.3=2024-12-11=256.tar.gz'
'cmlc-live=v2.7.3=2024-12-11=354.tar.gz'
download_and_compress.sh
image-clean.sh
image-sync.sh
kubectl-1.30.4-amd64
'nginx=1.27.0=2024-12-11=538.tar.gz'
nohup.out
rke-1.30.4
yaml
root@onetools-2 ~/image » bash image-sync.sh -h 172.26.0.31:8033 -u harbor.cdcyy.com.cn/cmii/cmii-admin-data:6.1.1
[Upload] - image to process => harbor.cdcyy.com.cn/cmii/cmii-admin-data:6.1.1

[Upload] - starting image download!

Download - image downloaded successfully! => harbor.cdcyy.com.cn/cmii/cmii-admin-data:6.1.1

[Upload] - compressing image to => cmii-admin-data=6.1.1=2024-12-18=140.tar.gz
[Upload] - compression succeeded! cmii-admin-data=6.1.1=2024-12-18=140.tar.gz

[Upload] - starting upload to OSS!
...4-12-18=140.tar.gz: 204.79 MiB / 204.79 MiB  24.61 MiB/s  8s
[Upload] - upload to OSS succeeded => [2024-12-18 09:13:52 CST] 205MiB STANDARD cmii-admin-data=6.1.1=2024-12-18=140.tar.gz

[Upload] - run the following command on the target Master host:

source <(curl -sL https://b2.107421.xyz/image-sync.sh) -d cmii-admin-data=6.1.1=2024-12-18=140.tar.gz

[Update] - to update the microservice's tag in one step, run the following command:

wget https://oss.demo.uavcmlc.com/cmlc-installation/tmp/cmii-admin-data=6.1.1=2024-12-18=140.tar.gz && bash ./cmii-update.sh cmii-admin-data=6.1.1=2024-12-18=140.tar.gz

[Upload] - to run the steps manually, use the commands below; the full target image address is => 172.26.0.31:8033/cmii/cmii-admin-data:6.1.1

wget https://oss.demo.uavcmlc.com/cmlc-installation/tmp/cmii-admin-data=6.1.1=2024-12-18=140.tar.gz && docker load < cmii-admin-data=6.1.1=2024-12-18=140.tar.gz && docker tag harbor.cdcyy.com.cn/cmii/cmii-admin-data:6.1.1 172.26.0.31:8033/cmii/cmii-admin-data:6.1.1 && docker push 172.26.0.31:8033/cmii/cmii-admin-data:6.1.1

root@onetools-2 ~/image »
Network error: Software caused connection abort

Session stopped
- Press <return> to exit tab
- Press R to restart session
- Press S to save terminal output to file
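The transcript above names image tarballs as <name>=<tag>=<date>=<random>.tar.gz. A hedged sketch of the manual sync path shown at the end of the session, parameterized so it works for any tarball in the listing (the registry address 172.26.0.31:8033 is the one from the log):

# Load a tarball named <name>=<tag>=<date>=<random>.tar.gz, retag it for the
# private registry, and push it.
f='cmii-admin-data=6.1.1=2024-12-18=140.tar.gz'
IFS='=' read -r name tag _date _rand <<< "${f%.tar.gz}"
docker load < "$f"
docker tag  "harbor.cdcyy.com.cn/cmii/${name}:${tag}" "172.26.0.31:8033/cmii/${name}:${tag}"
docker push "172.26.0.31:8033/cmii/${name}:${tag}"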

File diff suppressed because it is too large

View File

@@ -0,0 +1,574 @@
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-securityh5
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "securityh5",
AppClientId: "APP_N3ImO0Ubfu9peRHD"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-threedsimulation
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "threedsimulation",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-jiangsuwenlv
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "jiangsuwenlv",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-hljtt
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "hljtt",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-pangu
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-base
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "base",
AppClientId: "APP_9LY41OaKSqk2btY0"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-logistics
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "logistics",
AppClientId: "APP_PvdfRRRBPL8xbIwl"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-oms
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "oms",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-visualization
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "visualization",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-classification
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "classification",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-supervision
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "supervision",
AppClientId: "APP_qqSu82THfexI8PLM"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-armypeople
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "armypeople",
AppClientId: "APP_UIegse6Lfou9pO1U"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-open
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "open",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-qinghaitourism
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "qinghaitourism",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-smsecret
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "smsecret",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-mianyangbackend
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "mianyangbackend",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-media
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "media",
AppClientId: "APP_4AU8lbifESQO4FD6"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-mws
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "mws",
AppClientId: "APP_uKniXPELlRERBBwK"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-seniclive
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "seniclive",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-uasms
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "uasms",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-emergency
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "emergency",
AppClientId: "APP_aGsTAY1uMZrpKdfk"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-multiterminal
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "multiterminal",
AppClientId: "APP_PvdfRRRBPL8xbIwl"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-hyper
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "hyper",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-scanner
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "scanner",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-cmsportal
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "cmsportal",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-share
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "share",
AppClientId: "APP_4lVSVI0ZGxTssir8"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-uas
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "uas",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-dikongzhixingh5
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "dikongzhixingh5",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-supervisionh5
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "supervisionh5",
AppClientId: "APP_qqSu82THfexI8PLM"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-detection
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "detection",
AppClientId: "APP_FDHW2VLVDWPnnOCy"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-secenter
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "secenter",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-eventsh5
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "eventsh5",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-dispatchh5
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "dispatchh5",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-pilot2cloud
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "pilot2cloud",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-blockchain
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "blockchain",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-smauth
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "smauth",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-ai-brain
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "ai-brain",
AppClientId: "APP_rafnuCAmBESIVYMH"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-security
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "security",
AppClientId: "APP_JUSEMc7afyWXxvE7"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-traffic
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "traffic",
AppClientId: "APP_Jc8i2wOQ1t73QEJS"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-qingdao
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "qingdao",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-splice
namespace: gsyd-app
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "app",
CloudHOST: "117.156.17.88:8088",
ApplicationShortName: "splice",
AppClientId: "APP_zE0M3sTRXrCIJS8Y"
}
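These tenant-prefix-* ConfigMaps are the same objects the frontend Deployments earlier mount as /home/cmii-platform/dist/ingress-config.js (e.g. tenant-prefix-splice). A quick way to inspect what a pod will actually receive, assuming kubectl access to the gsyd-app namespace:

# Print the rendered ingress-config.js for one tenant; the dot in the data
# key has to be escaped in jsonpath.
kubectl -n gsyd-app get configmap tenant-prefix-splice \
  -o jsonpath='{.data.ingress-config\.js}'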

View File

@@ -0,0 +1,309 @@
---
apiVersion: v1
kind: Namespace
metadata:
name: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
name: kubernetes-dashboard
namespace: kube-system
spec:
ports:
- port: 443
targetPort: 8443
nodePort: 39999
selector:
k8s-app: kubernetes-dashboard
type: NodePort
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kube-system
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-csrf
namespace: kube-system
type: Opaque
data:
csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-key-holder
namespace: kube-system
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-settings
namespace: kube-system
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
rules:
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster", "dashboard-metrics-scraper"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
rules:
# Allow Metrics Scraper to get metrics from the Metrics server
- apiGroups: ["metrics.k8s.io"]
resources: ["pods", "nodes"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
imagePullSecrets:
- name: harborsecret
containers:
- name: kubernetes-dashboard
image: 10.215.66.85:8033/cmii/dashboard:v2.0.1
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
- --namespace=kube-system
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kube-system
spec:
ports:
- port: 8000
targetPort: 8000
selector:
k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kube-system
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: dashboard-metrics-scraper
template:
metadata:
labels:
k8s-app: dashboard-metrics-scraper
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
spec:
containers:
- name: dashboard-metrics-scraper
image: 10.215.66.85:8033/cmii/metrics-scraper:v1.0.4
ports:
- containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 8000
initialDelaySeconds: 30
timeoutSeconds: 30
volumeMounts:
- mountPath: /tmp
name: tmp-volume
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
volumes:
- name: tmp-volume
emptyDir: {}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kube-system
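The manifest above binds the admin-user ServiceAccount to cluster-admin, so logging in to the dashboard on NodePort 39999 only requires a token for that account. A minimal sketch (kubectl create token exists from Kubernetes 1.24 on; older clusters read the auto-generated secret instead):

# Kubernetes >= 1.24: issue a short-lived token for the dashboard login.
kubectl -n kube-system create token admin-user
# Older clusters: read the token from the ServiceAccount's generated secret.
kubectl -n kube-system describe secret \
  "$(kubectl -n kube-system get secret | awk '/admin-user/{print $1; exit}')"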

View File

@@ -0,0 +1,274 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: helm-emqxs
namespace: gsyd-app
---
apiVersion: v1
kind: ConfigMap
metadata:
name: helm-emqxs-env
namespace: gsyd-app
labels:
cmii.type: middleware
cmii.app: helm-emqxs
cmii.emqx.architecture: cluster
helm.sh/chart: emqx-1.1.0
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 6.1.1
data:
EMQX_CLUSTER__K8S__APISERVER: "https://kubernetes.default.svc.cluster.local:443"
EMQX_NAME: "helm-emqxs"
EMQX_CLUSTER__DISCOVERY: "k8s"
EMQX_CLUSTER__K8S__APP_NAME: "helm-emqxs"
EMQX_CLUSTER__K8S__SERVICE_NAME: "helm-emqxs-headless"
EMQX_CLUSTER__K8S__ADDRESS_TYPE: "dns"
EMQX_CLUSTER__K8S__namespace: "gsyd-app"
EMQX_CLUSTER__K8S__SUFFIX: "svc.cluster.local"
EMQX_ALLOW_ANONYMOUS: "false"
EMQX_ACL_NOMATCH: "deny"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: helm-emqxs-cm
namespace: gsyd-app
labels:
cmii.type: middleware
cmii.app: helm-emqxs
cmii.emqx.architecture: cluster
helm.sh/chart: emqx-1.1.0
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 6.1.1
data:
emqx_auth_mnesia.conf: |-
auth.mnesia.password_hash = sha256
# clientid authentication data
# auth.client.1.clientid = admin
# auth.client.1.password = 4YPk*DS%+5
## username authentication data
auth.user.1.username = admin
auth.user.1.password = odD8#Ve7.B
auth.user.2.username = cmlc
auth.user.2.password = odD8#Ve7.B
acl.conf: |-
{allow, {user, "admin"}, pubsub, ["admin/#"]}.
{allow, {user, "dashboard"}, subscribe, ["$SYS/#"]}.
{allow, {ipaddr, "127.0.0.1"}, pubsub, ["$SYS/#", "#"]}.
{deny, all, subscribe, ["$SYS/#", {eq, "#"}]}.
{allow, all}.
loaded_plugins: |-
{emqx_auth_mnesia,true}.
{emqx_management, true}.
{emqx_recon, true}.
{emqx_retainer, false}.
{emqx_dashboard, true}.
{emqx_telemetry, true}.
{emqx_rule_engine, true}.
{emqx_bridge_mqtt, false}.
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: helm-emqxs
namespace: gsyd-app
labels:
cmii.type: middleware
cmii.app: helm-emqxs
cmii.emqx.architecture: cluster
helm.sh/chart: emqx-1.1.0
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 6.1.1
spec:
replicas: 1
serviceName: helm-emqxs-headless
updateStrategy:
type: RollingUpdate
selector:
matchLabels:
cmii.type: middleware
cmii.app: helm-emqxs
cmii.emqx.architecture: cluster
template:
metadata:
labels:
cmii.type: middleware
cmii.app: helm-emqxs
cmii.emqx.architecture: cluster
helm.sh/chart: emqx-1.1.0
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 6.1.1
spec:
affinity: {}
imagePullSecrets:
- name: harborsecret
serviceAccountName: helm-emqxs
containers:
- name: helm-emqxs
image: 10.215.66.85:8033/cmii/emqx:4.4.19
imagePullPolicy: Always
ports:
- name: mqtt
containerPort: 1883
- name: mqttssl
containerPort: 8883
- name: mgmt
containerPort: 8081
- name: ws
containerPort: 8083
- name: wss
containerPort: 8084
- name: dashboard
containerPort: 18083
- name: ekka
containerPort: 4370
envFrom:
- configMapRef:
name: helm-emqxs-env
resources: {}
volumeMounts:
- name: emqx-data
mountPath: "/opt/emqx/data/mnesia"
readOnly: false
- name: helm-emqxs-cm
mountPath: "/opt/emqx/etc/plugins/emqx_auth_mnesia.conf"
subPath: emqx_auth_mnesia.conf
readOnly: false
# - name: helm-emqxs-cm
# mountPath: "/opt/emqx/etc/acl.conf"
# subPath: "acl.conf"
# readOnly: false
- name: helm-emqxs-cm
mountPath: "/opt/emqx/data/loaded_plugins"
subPath: loaded_plugins
readOnly: false
volumes:
- name: emqx-data
persistentVolumeClaim:
claimName: helm-emqxs
- name: helm-emqxs-cm
configMap:
name: helm-emqxs-cm
items:
- key: emqx_auth_mnesia.conf
path: emqx_auth_mnesia.conf
- key: acl.conf
path: acl.conf
- key: loaded_plugins
path: loaded_plugins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: helm-emqxs
namespace: gsyd-app
rules:
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
- watch
- list
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: helm-emqxs
namespace: gsyd-app
subjects:
- kind: ServiceAccount
name: helm-emqxs
namespace: gsyd-app
roleRef:
kind: Role
name: helm-emqxs
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Service
metadata:
name: helm-emqxs
namespace: gsyd-app
labels:
cmii.type: middleware
cmii.app: helm-emqxs
cmii.emqx.architecture: cluster
helm.sh/chart: emqx-1.1.0
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 6.1.1
spec:
type: NodePort
selector:
cmii.type: middleware
cmii.app: helm-emqxs
cmii.emqx.architecture: cluster
ports:
- port: 1883
name: mqtt
targetPort: 1883
nodePort: 31883
- port: 18083
name: dashboard
targetPort: 18083
nodePort: 38085
- port: 8083
name: mqtt-websocket
targetPort: 8083
nodePort: 38083
---
apiVersion: v1
kind: Service
metadata:
name: helm-emqxs-headless
namespace: gsyd-app
labels:
cmii.type: middleware
cmii.app: helm-emqxs
cmii.emqx.architecture: cluster
helm.sh/chart: emqx-1.1.0
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 6.1.1
spec:
type: ClusterIP
clusterIP: None
selector:
cmii.type: middleware
cmii.app: helm-emqxs
cmii.emqx.architecture: cluster
ports:
- name: mqtt
port: 1883
protocol: TCP
targetPort: 1883
- name: mqttssl
port: 8883
protocol: TCP
targetPort: 8883
- name: mgmt
port: 8081
protocol: TCP
targetPort: 8081
- name: websocket
port: 8083
protocol: TCP
targetPort: 8083
- name: wss
port: 8084
protocol: TCP
targetPort: 8084
- name: dashboard
port: 18083
protocol: TCP
targetPort: 18083
- name: ekka
port: 4370
protocol: TCP
targetPort: 4370
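With anonymous access disabled (EMQX_ALLOW_ANONYMOUS: "false"), the broker only accepts the credentials defined in helm-emqxs-cm. A minimal smoke test over the NodePort Service above, assuming mosquitto-clients is installed and <node-ip> is any cluster node's address:

# Subscribe through the MQTT NodePort (31883) with the configured user.
mosquitto_sub -h <node-ip> -p 31883 -u admin -P 'odD8#Ve7.B' -t 'test/#' -v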

File diff suppressed because it is too large

View File

@@ -0,0 +1,702 @@
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: frontend-applications-ingress
namespace: gsyd-app
labels:
type: frontend
octopus.control: all-ingress-config-wdd
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 6.1.1
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/configuration-snippet: |
rewrite ^(/supervision)$ $1/ redirect;
rewrite ^(/supervisionh5)$ $1/ redirect;
rewrite ^(/pangu)$ $1/ redirect;
rewrite ^(/ai-brain)$ $1/ redirect;
rewrite ^(/armypeople)$ $1/ redirect;
rewrite ^(/base)$ $1/ redirect;
rewrite ^(/blockchain)$ $1/ redirect;
rewrite ^(/classification)$ $1/ redirect;
rewrite ^(/cmsportal)$ $1/ redirect;
rewrite ^(/detection)$ $1/ redirect;
rewrite ^(/dikongzhixingh5)$ $1/ redirect;
rewrite ^(/dispatchh5)$ $1/ redirect;
rewrite ^(/emergency)$ $1/ redirect;
rewrite ^(/eventsh5)$ $1/ redirect;
rewrite ^(/hljtt)$ $1/ redirect;
rewrite ^(/hyper)$ $1/ redirect;
rewrite ^(/jiangsuwenlv)$ $1/ redirect;
rewrite ^(/logistics)$ $1/ redirect;
rewrite ^(/media)$ $1/ redirect;
rewrite ^(/mianyangbackend)$ $1/ redirect;
rewrite ^(/multiterminal)$ $1/ redirect;
rewrite ^(/mws)$ $1/ redirect;
rewrite ^(/oms)$ $1/ redirect;
rewrite ^(/open)$ $1/ redirect;
rewrite ^(/pilot2cloud)$ $1/ redirect;
rewrite ^(/qingdao)$ $1/ redirect;
rewrite ^(/qinghaitourism)$ $1/ redirect;
rewrite ^(/scanner)$ $1/ redirect;
rewrite ^(/security)$ $1/ redirect;
rewrite ^(/securityh5)$ $1/ redirect;
rewrite ^(/seniclive)$ $1/ redirect;
rewrite ^(/share)$ $1/ redirect;
rewrite ^(/smauth)$ $1/ redirect;
rewrite ^(/smsecret)$ $1/ redirect;
rewrite ^(/splice)$ $1/ redirect;
rewrite ^(/threedsimulation)$ $1/ redirect;
rewrite ^(/traffic)$ $1/ redirect;
rewrite ^(/uas)$ $1/ redirect;
rewrite ^(/uasms)$ $1/ redirect;
rewrite ^(/visualization)$ $1/ redirect;
rewrite ^(/secenter)$ $1/ redirect;
spec:
rules:
- host: fake-domain.gsyd-app.io
http:
paths:
- path: /app/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform
servicePort: 9528
- path: /app/supervision/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-suav-platform-supervision
servicePort: 9528
- path: /app/supervisionh5/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-suav-platform-supervisionh5
servicePort: 9528
- path: /app/pangu/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform
servicePort: 9528
- path: /app/ai-brain/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-ai-brain
servicePort: 9528
- path: /app/armypeople/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-armypeople
servicePort: 9528
- path: /app/base/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-base
servicePort: 9528
- path: /app/blockchain/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-blockchain
servicePort: 9528
- path: /app/classification/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-classification
servicePort: 9528
- path: /app/cmsportal/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-cms-portal
servicePort: 9528
- path: /app/detection/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-detection
servicePort: 9528
- path: /app/dikongzhixingh5/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-dikongzhixingh5
servicePort: 9528
- path: /app/dispatchh5/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-dispatchh5
servicePort: 9528
- path: /app/emergency/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-emergency-rescue
servicePort: 9528
- path: /app/eventsh5/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-eventsh5
servicePort: 9528
- path: /app/hljtt/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-hljtt
servicePort: 9528
- path: /app/hyper/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-hyperspectral
servicePort: 9528
- path: /app/jiangsuwenlv/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-jiangsuwenlv
servicePort: 9528
- path: /app/logistics/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-logistics
servicePort: 9528
- path: /app/media/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-media
servicePort: 9528
- path: /app/mianyangbackend/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-mianyangbackend
servicePort: 9528
- path: /app/multiterminal/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-multiterminal
servicePort: 9528
- path: /app/mws/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-mws
servicePort: 9528
- path: /app/oms/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-oms
servicePort: 9528
- path: /app/open/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-open
servicePort: 9528
- path: /app/pilot2cloud/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-pilot2-to-cloud
servicePort: 9528
- path: /app/qingdao/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-qingdao
servicePort: 9528
- path: /app/qinghaitourism/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-qinghaitourism
servicePort: 9528
- path: /app/scanner/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-scanner
servicePort: 9528
- path: /app/security/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-security
servicePort: 9528
- path: /app/securityh5/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-securityh5
servicePort: 9528
- path: /app/seniclive/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-seniclive
servicePort: 9528
- path: /app/share/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-share
servicePort: 9528
- path: /app/smauth/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-smauth
servicePort: 9528
- path: /app/smsecret/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-smsecret
servicePort: 9528
- path: /app/splice/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-splice
servicePort: 9528
- path: /app/threedsimulation/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-threedsimulation
servicePort: 9528
- path: /app/traffic/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-traffic
servicePort: 9528
- path: /app/uas/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-uas
servicePort: 9528
- path: /app/uasms/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-uasms
servicePort: 9528
- path: /app/visualization/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-visualization
servicePort: 9528
- path: /app/secenter/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uavms-platform-security-center
servicePort: 9528
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: backend-applications-ingress
namespace: gsyd-app
labels:
type: backend
octopus.control: all-ingress-config-wdd
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 6.1.1
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/enable-cors: "true"
spec:
rules:
- host: cmii-admin-data.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-admin-data
servicePort: 8080
- host: cmii-admin-gateway.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-admin-gateway
servicePort: 8080
- host: cmii-admin-user.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-admin-user
servicePort: 8080
- host: cmii-app-release.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-app-release
servicePort: 8080
- host: cmii-open-gateway.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-open-gateway
servicePort: 8080
- host: cmii-suav-supervision.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-suav-supervision
servicePort: 8080
- host: cmii-uas-gateway.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uas-gateway
servicePort: 8080
- host: cmii-uas-lifecycle.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uas-lifecycle
servicePort: 8080
- host: cmii-uav-airspace.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-airspace
servicePort: 8080
- host: cmii-uav-alarm.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-alarm
servicePort: 8080
- host: cmii-uav-autowaypoint.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-autowaypoint
servicePort: 8080
- host: cmii-uav-brain.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-brain
servicePort: 8080
- host: cmii-uav-bridge.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-bridge
servicePort: 8080
- host: cmii-uav-cloud-live.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-cloud-live
servicePort: 8080
- host: cmii-uav-clusters.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-clusters
servicePort: 8080
- host: cmii-uav-cms.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-cms
servicePort: 8080
- host: cmii-uav-data-post-process.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-data-post-process
servicePort: 8080
- host: cmii-uav-depotautoreturn.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-depotautoreturn
servicePort: 8080
- host: cmii-uav-developer.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-developer
servicePort: 8080
- host: cmii-uav-device.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-device
servicePort: 8080
- host: cmii-uav-emergency.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-emergency
servicePort: 8080
- host: cmii-uav-fwdd.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-fwdd
servicePort: 8080
- host: cmii-uav-gateway.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-gateway
servicePort: 8080
- host: cmii-uav-gis-server.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-gis-server
servicePort: 8080
- host: cmii-uav-grid-datasource.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-grid-datasource
servicePort: 8080
- host: cmii-uav-grid-engine.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-grid-engine
servicePort: 8080
- host: cmii-uav-grid-manage.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-grid-manage
servicePort: 8080
- host: cmii-uav-industrial-portfolio.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-industrial-portfolio
servicePort: 8080
- host: cmii-uav-integration.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-integration
servicePort: 8080
- host: cmii-uav-iot-dispatcher.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-iot-dispatcher
servicePort: 8080
- host: cmii-uav-kpi-monitor.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-kpi-monitor
servicePort: 8080
- host: cmii-uav-logger.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-logger
servicePort: 8080
- host: cmii-uav-material-warehouse.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-material-warehouse
servicePort: 8080
- host: cmii-uav-mission.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-mission
servicePort: 8080
- host: cmii-uav-mqtthandler.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-mqtthandler
servicePort: 8080
- host: cmii-uav-multilink.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-multilink
servicePort: 8080
- host: cmii-uav-notice.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-notice
servicePort: 8080
- host: cmii-uav-oauth.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-oauth
servicePort: 8080
- host: cmii-uav-process.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-process
servicePort: 8080
- host: cmii-uav-sense-adapter.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-sense-adapter
servicePort: 8080
- host: cmii-uav-surveillance.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-surveillance
servicePort: 8080
- host: cmii-uav-sync.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-sync
servicePort: 8080
- host: cmii-uav-threedsimulation.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-threedsimulation
servicePort: 8080
- host: cmii-uav-tower.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-tower
servicePort: 8080
- host: cmii-uav-user.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-user
servicePort: 8080
- host: cmii-uav-waypoint.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-waypoint
servicePort: 8080
- host: cmii-uavms-security-center.uavcloud-app.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uavms-security-center
servicePort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: all-gateways-ingress
namespace: gsyd-app
labels:
type: api-gateway
octopus.control: all-ingress-config-1.1.0
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 6.1.1
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
spec:
rules:
- host: fake-domain.gsyd-app.io
http:
paths:
- path: /app/oms/api/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-admin-gateway
servicePort: 8080
- path: /app/open/api/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-open-gateway
servicePort: 8080
- path: /app/api/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-gateway
servicePort: 8080
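A quick sanity check of the gateway routing above (a sketch: the node IP and the /health path are assumptions, and fake-domain.gsyd-app.io must be passed as the Host header since it is not a real DNS name):
# rewrite-target /$1 plus the capture group in /app/api/?(.*) strips the prefix,
# so this request should reach cmii-uav-gateway:8080 as /health
curl -H 'Host: fake-domain.gsyd-app.io' 'http://<ingress-node-ip>/app/api/health'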


@@ -0,0 +1,78 @@
---
apiVersion: v1
kind: Service
metadata:
name: helm-mongo
namespace: gsyd-app
labels:
cmii.app: helm-mongo
cmii.type: middleware
helm.sh/chart: mongo-1.1.0
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 6.1.1
spec:
type: NodePort
selector:
cmii.app: helm-mongo
cmii.type: middleware
ports:
- port: 27017
name: server-27017
targetPort: 27017
nodePort: 37017
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: helm-mongo
namespace: gsyd-app
labels:
cmii.app: helm-mongo
cmii.type: middleware
helm.sh/chart: mongo-1.1.0
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 6.1.1
spec:
serviceName: helm-mongo
replicas: 1
selector:
matchLabels:
cmii.app: helm-mongo
cmii.type: middleware
template:
metadata:
labels:
cmii.app: helm-mongo
cmii.type: middleware
helm.sh/chart: mongo-1.1.0
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 6.1.1
annotations:
pod.alpha.kubernetes.io/initialized: "true"
spec:
imagePullSecrets:
- name: harborsecret
affinity: {}
containers:
- name: helm-mongo
image: 10.215.66.85:8033/cmii/mongo:5.0
resources: {}
ports:
- containerPort: 27017
name: mongo27017
protocol: TCP
env:
- name: MONGO_INITDB_ROOT_USERNAME
value: cmlc
- name: MONGO_INITDB_ROOT_PASSWORD
value: REdPza8#oVlt
volumeMounts:
- name: mongo-data
mountPath: /data/db
readOnly: false
subPath: default/helm-mongo/data/db
volumes:
- name: mongo-data
persistentVolumeClaim:
claimName: helm-mongo
---
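The Service above exposes MongoDB on NodePort 37017; a minimal connectivity check (mongosh ships in the mongo:5.0 image; the node IP and password placeholders stand for values from the manifest):
kubectl -n gsyd-app get pod -l cmii.app=helm-mongo
# connect through any node IP on the NodePort, using the root credentials set via env above
mongosh "mongodb://cmlc:<password>@<node-ip>:37017/admin" --eval 'db.runCommand({ ping: 1 })'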


@@ -0,0 +1,410 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: helm-mysql
namespace: gsyd-app
labels:
app.kubernetes.io/name: mysql-db
octopus.control: mysql-db-wdd
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/managed-by: octopus
annotations: {}
secrets:
- name: helm-mysql
---
apiVersion: v1
kind: Secret
metadata:
name: helm-mysql
namespace: gsyd-app
labels:
app.kubernetes.io/name: mysql-db
octopus.control: mysql-db-wdd
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/managed-by: octopus
type: Opaque
data:
mysql-root-password: "UXpmWFFoZDNiUQ=="
mysql-password: "S0F0cm5PckFKNw=="
---
apiVersion: v1
kind: ConfigMap
metadata:
name: helm-mysql
namespace: gsyd-app
labels:
app.kubernetes.io/name: mysql-db
octopus.control: mysql-db-wdd
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/managed-by: octopus
app.kubernetes.io/component: primary
data:
my.cnf: |-
[mysqld]
port=3306
basedir=/opt/bitnami/mysql
datadir=/bitnami/mysql/data
pid-file=/opt/bitnami/mysql/tmp/mysqld.pid
socket=/opt/bitnami/mysql/tmp/mysql.sock
log-error=/bitnami/mysql/data/error.log
general_log_file = /bitnami/mysql/data/general.log
slow_query_log_file = /bitnami/mysql/data/slow.log
innodb_data_file_path = ibdata1:512M:autoextend
innodb_buffer_pool_size = 512M
innodb_buffer_pool_instances = 2
innodb_log_file_size = 512M
innodb_log_files_in_group = 4
log-bin = /bitnami/mysql/data/mysql-bin
max_binlog_size=1G
transaction_isolation = REPEATABLE-READ
default_storage_engine = innodb
character-set-server = utf8mb4
collation-server=utf8mb4_bin
binlog_format = ROW
binlog_rows_query_log_events=on
binlog_cache_size=4M
binlog_expire_logs_seconds = 1296000
max_binlog_cache_size=2G
gtid_mode = on
enforce_gtid_consistency = 1
sync_binlog = 1
innodb_flush_log_at_trx_commit = 1
innodb_flush_method = O_DIRECT
log_slave_updates=1
relay_log_recovery = 1
relay-log-purge = 1
default_time_zone = '+08:00'
lower_case_table_names=1
log_bin_trust_function_creators=1
group_concat_max_len=67108864
innodb_io_capacity = 4000
innodb_io_capacity_max = 8000
innodb_flush_sync = 0
innodb_flush_neighbors = 0
innodb_write_io_threads = 8
innodb_read_io_threads = 8
innodb_purge_threads = 4
innodb_page_cleaners = 4
innodb_open_files = 65535
innodb_max_dirty_pages_pct = 50
innodb_lru_scan_depth = 4000
innodb_checksum_algorithm = crc32
innodb_lock_wait_timeout = 10
innodb_rollback_on_timeout = 1
innodb_print_all_deadlocks = 1
innodb_file_per_table = 1
innodb_online_alter_log_max_size = 4G
innodb_stats_on_metadata = 0
innodb_thread_concurrency = 0
innodb_sync_spin_loops = 100
innodb_spin_wait_delay = 30
lock_wait_timeout = 3600
slow_query_log = 1
long_query_time = 10
log_queries_not_using_indexes =1
log_throttle_queries_not_using_indexes = 60
min_examined_row_limit = 100
log_slow_admin_statements = 1
log_slow_slave_statements = 1
default_authentication_plugin=mysql_native_password
skip-name-resolve=1
explicit_defaults_for_timestamp=1
plugin_dir=/opt/bitnami/mysql/plugin
max_allowed_packet=128M
max_connections = 2000
max_connect_errors = 1000000
table_definition_cache=2000
table_open_cache_instances=64
tablespace_definition_cache=1024
thread_cache_size=256
interactive_timeout = 600
wait_timeout = 600
tmpdir=/opt/bitnami/mysql/tmp
bind-address=0.0.0.0
performance_schema = 1
performance_schema_instrument = '%memory%=on'
performance_schema_instrument = '%lock%=on'
innodb_monitor_enable=ALL
[mysql]
no-auto-rehash
[mysqldump]
quick
max_allowed_packet = 32M
[client]
port=3306
socket=/opt/bitnami/mysql/tmp/mysql.sock
default-character-set=UTF8
plugin_dir=/opt/bitnami/mysql/plugin
[manager]
port=3306
socket=/opt/bitnami/mysql/tmp/mysql.sock
pid-file=/opt/bitnami/mysql/tmp/mysqld.pid
---
apiVersion: v1
kind: ConfigMap
metadata:
name: helm-mysql-init-scripts
namespace: gsyd-app
labels:
app.kubernetes.io/name: mysql-db
octopus.control: mysql-db-wdd
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/managed-by: octopus
app.kubernetes.io/component: primary
data:
create_users_grants_core.sql: |-
create user zyly@'%' identified by 'Cmii@451315';
grant select on *.* to zyly@'%';
create user zyly_qc@'%' identified by 'Uh)E_owCyb16';
grant all on *.* to zyly_qc@'%';
create user k8s_admin@'%' identified by 'fP#UaH6qQ3)8';
grant all on *.* to k8s_admin@'%';
create user audit_dba@'%' identified by 'PjCzqiBmJaTpgkoYXynH';
grant all on *.* to audit_dba@'%';
create user db_backup@'%' identified by 'RU5Pu(4FGdT9';
GRANT SELECT, RELOAD, PROCESS, LOCK TABLES, REPLICATION CLIENT, EVENT on *.* to db_backup@'%';
create user monitor@'%' identified by 'PL3#nGtrWbf-';
grant REPLICATION CLIENT on *.* to monitor@'%';
flush privileges;
---
kind: Service
apiVersion: v1
metadata:
name: cmii-mysql
namespace: gsyd-app
labels:
app.kubernetes.io/component: primary
app.kubernetes.io/managed-by: octopus
app.kubernetes.io/name: mysql-db
app.kubernetes.io/release: gsyd-app
cmii.app: mysql
cmii.type: middleware
octopus.control: mysql-db-wdd
spec:
ports:
- name: mysql
protocol: TCP
port: 13306
targetPort: mysql
selector:
app.kubernetes.io/component: primary
app.kubernetes.io/name: mysql-db
app.kubernetes.io/release: gsyd-app
cmii.app: mysql
cmii.type: middleware
type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
name: helm-mysql-headless
namespace: gsyd-app
labels:
app.kubernetes.io/name: mysql-db
octopus.control: mysql-db-wdd
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/managed-by: octopus
cmii.type: middleware
cmii.app: mysql
app.kubernetes.io/component: primary
annotations: {}
spec:
type: ClusterIP
clusterIP: None
publishNotReadyAddresses: true
ports:
- name: mysql
port: 3306
targetPort: mysql
selector:
app.kubernetes.io/name: mysql-db
app.kubernetes.io/release: gsyd-app
cmii.type: middleware
cmii.app: mysql
app.kubernetes.io/component: primary
---
apiVersion: v1
kind: Service
metadata:
name: helm-mysql
namespace: gsyd-app
labels:
app.kubernetes.io/name: mysql-db
octopus.control: mysql-db-wdd
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/managed-by: octopus
cmii.type: middleware
cmii.app: mysql
app.kubernetes.io/component: primary
annotations: {}
spec:
type: NodePort
ports:
- name: mysql
port: 3306
protocol: TCP
targetPort: mysql
nodePort: 33306
selector:
app.kubernetes.io/name: mysql-db
app.kubernetes.io/release: gsyd-app
cmii.type: middleware
cmii.app: mysql
app.kubernetes.io/component: primary
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: helm-mysql
namespace: gsyd-app
labels:
app.kubernetes.io/name: mysql-db
octopus.control: mysql-db-wdd
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/managed-by: octopus
cmii.type: middleware
cmii.app: mysql
app.kubernetes.io/component: primary
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: mysql-db
app.kubernetes.io/release: gsyd-app
cmii.type: middleware
cmii.app: mysql
app.kubernetes.io/component: primary
serviceName: helm-mysql
updateStrategy:
type: RollingUpdate
template:
metadata:
annotations:
checksum/configuration: 6b60fa0f3a846a6ada8effdc4f823cf8003d42a8c8f630fe8b1b66d3454082dd
labels:
app.kubernetes.io/name: mysql-db
octopus.control: mysql-db-wdd
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/managed-by: octopus
cmii.type: middleware
cmii.app: mysql
app.kubernetes.io/component: primary
spec:
imagePullSecrets:
- name: harborsecret
serviceAccountName: helm-mysql
affinity: {}
nodeSelector:
mysql-deploy: "true"
securityContext:
fsGroup: 1001
initContainers:
- name: change-volume-permissions
image: 10.215.66.85:8033/cmii/bitnami-shell:11-debian-11-r136
imagePullPolicy: "Always"
command:
- /bin/bash
- -ec
- |
chown -R 1001:1001 /bitnami/mysql
securityContext:
runAsUser: 0
volumeMounts:
- name: mysql-data
mountPath: /bitnami/mysql
containers:
- name: mysql
image: 10.215.66.85:8033/cmii/mysql:8.1.0-debian-11-r42
imagePullPolicy: "IfNotPresent"
securityContext:
runAsUser: 1001
env:
- name: BITNAMI_DEBUG
value: "true"
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: helm-mysql
key: mysql-root-password
- name: MYSQL_DATABASE
value: "cmii"
ports:
- name: mysql
containerPort: 3306
livenessProbe:
failureThreshold: 5
initialDelaySeconds: 120
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 3
exec:
command:
- /bin/bash
- -ec
- |
password_aux="${MYSQL_ROOT_PASSWORD:-}"
if [[ -f "${MYSQL_ROOT_PASSWORD_FILE:-}" ]]; then
password_aux=$(cat "$MYSQL_ROOT_PASSWORD_FILE")
fi
mysqladmin status -uroot -p"${password_aux}"
readinessProbe:
failureThreshold: 5
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 3
exec:
command:
- /bin/bash
- -ec
- |
password_aux="${MYSQL_ROOT_PASSWORD:-}"
if [[ -f "${MYSQL_ROOT_PASSWORD_FILE:-}" ]]; then
password_aux=$(cat "$MYSQL_ROOT_PASSWORD_FILE")
fi
mysqladmin status -uroot -p"${password_aux}"
startupProbe:
failureThreshold: 60
initialDelaySeconds: 120
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
exec:
command:
- /bin/bash
- -ec
- |
password_aux="${MYSQL_ROOT_PASSWORD:-}"
if [[ -f "${MYSQL_ROOT_PASSWORD_FILE:-}" ]]; then
password_aux=$(cat "$MYSQL_ROOT_PASSWORD_FILE")
fi
mysqladmin status -uroot -p"${password_aux}"
resources:
limits: {}
requests: {}
volumeMounts:
- name: mysql-data
mountPath: /bitnami/mysql
- name: custom-init-scripts
mountPath: /docker-entrypoint-initdb.d
- name: config
mountPath: /opt/bitnami/mysql/conf/my.cnf
subPath: my.cnf
volumes:
- name: config
configMap:
name: helm-mysql
- name: custom-init-scripts
configMap:
name: helm-mysql-init-scripts
- name: mysql-data
hostPath:
path: /var/lib/docker/mysql-pv/gsyd-app/
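A minimal check of the rendered MySQL setup, assuming a cluster node reachable on NodePort 33306 (the k8s_admin account is created by the init script above):
kubectl -n gsyd-app rollout status statefulset/helm-mysql
# -p with no attached value prompts for the password from the helm-mysql secret;
# the trailing cmii is the database name
mysql -h <node-ip> -P 33306 -u k8s_admin -p -e 'SELECT VERSION();' cmii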

View File

@@ -0,0 +1,130 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: helm-nacos-cm
namespace: gsyd-app
labels:
cmii.app: helm-nacos
cmii.type: middleware
octopus.control: nacos-wdd
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/version: 6.1.1
data:
mysql.db.name: "cmii_nacos_config"
mysql.db.host: "helm-mysql"
mysql.port: "3306"
mysql.user: "k8s_admin"
mysql.password: "fP#UaH6qQ3)8"
---
apiVersion: v1
kind: Service
metadata:
name: helm-nacos
namespace: gsyd-app
labels:
cmii.app: helm-nacos
cmii.type: middleware
octopus.control: nacos-wdd
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/version: 6.1.1
spec:
type: NodePort
selector:
cmii.app: helm-nacos
cmii.type: middleware
ports:
- port: 8848
name: server
targetPort: 8848
nodePort: 38848
- port: 9848
name: server12
targetPort: 9848
- port: 9849
name: server23
targetPort: 9849
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: helm-nacos
namespace: gsyd-app
labels:
cmii.app: helm-nacos
cmii.type: middleware
octopus.control: nacos-wdd
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/version: 6.1.1
spec:
serviceName: helm-nacos
replicas: 1
selector:
matchLabels:
cmii.app: helm-nacos
cmii.type: middleware
template:
metadata:
labels:
cmii.app: helm-nacos
cmii.type: middleware
octopus.control: nacos-wdd
app.kubernetes.io/managed-by: octopus
app.kubernetes.io/version: 6.1.1
annotations:
pod.alpha.kubernetes.io/initialized: "true"
spec:
imagePullSecrets:
- name: harborsecret
affinity: {}
containers:
- name: nacos-server
image: 10.215.66.85:8033/cmii/nacos-server:v2.1.2
ports:
- containerPort: 8848
name: dashboard
- containerPort: 9848
name: tcp-9848
- containerPort: 9849
name: tcp-9849
env:
- name: NACOS_AUTH_ENABLE
value: "false"
- name: NACOS_REPLICAS
value: "1"
- name: MYSQL_SERVICE_DB_NAME
valueFrom:
configMapKeyRef:
name: helm-nacos-cm
key: mysql.db.name
- name: MYSQL_SERVICE_PORT
valueFrom:
configMapKeyRef:
name: helm-nacos-cm
key: mysql.port
- name: MYSQL_SERVICE_USER
valueFrom:
configMapKeyRef:
name: helm-nacos-cm
key: mysql.user
- name: MYSQL_SERVICE_PASSWORD
valueFrom:
configMapKeyRef:
name: helm-nacos-cm
key: mysql.password
- name: MYSQL_SERVICE_HOST
valueFrom:
configMapKeyRef:
name: helm-nacos-cm
key: mysql.db.host
- name: NACOS_SERVER_PORT
value: "8848"
- name: NACOS_APPLICATION_PORT
value: "8848"
- name: PREFER_HOST_MODE
value: "hostname"
- name: MODE
value: standalone
- name: SPRING_DATASOURCE_PLATFORM
value: mysql
---
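To verify Nacos came up in standalone mode (a sketch; assumes the standard Nacos console health endpoint and a reachable node IP):
kubectl -n gsyd-app logs statefulset/helm-nacos --tail=20
curl 'http://<node-ip>:38848/nacos/v1/console/health/readiness'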


@@ -0,0 +1,38 @@
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: test-claim
annotations:
volume.beta.kubernetes.io/storage-class: "nfs-prod-distribute" # must match metadata.name in nfs-StorageClass.yaml
spec:
accessModes:
- ReadWriteOnce
storageClassName: nfs-prod-distribute
resources:
requests:
storage: 1Mi
---
kind: Pod
apiVersion: v1
metadata:
name: test-pod
spec:
imagePullSecrets:
- name: harborsecret
containers:
- name: test-pod
image: 10.215.66.85:8033/cmii/busybox:latest
command:
- "/bin/sh"
args:
- "-c"
- "touch /mnt/NFS-CREATE-SUCCESS && exit 0 || exit 1" #创建一个SUCCESS文件后退出
volumeMounts:
- name: nfs-pvc
mountPath: "/mnt"
restartPolicy: "Never"
volumes:
- name: nfs-pvc
persistentVolumeClaim:
claimName: test-claim # must match the PVC name above
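Applying this test manifest and checking for the marker file confirms dynamic provisioning end to end (the file name is an assumption, and the ls runs on the NFS server; the provisioner names the directory after namespace, PVC, and PV):
kubectl apply -f nfs-test.yaml      # assumed file name for the manifest above
kubectl get pod test-pod            # should reach Completed once the mount works
# then, on the NFS server (10.215.66.89):
ls /var/lib/docker/nfs_data/*test-claim*/NFS-CREATE-SUCCESS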


@@ -0,0 +1,114 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: kube-system # set according to the actual environment; same for the occurrences below
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: kube-system
roleRef:
kind: ClusterRole
# name: nfs-client-provisioner-runner
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: kube-system
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: kube-system
roleRef:
kind: Role
name: leader-locking-nfs-client-provisioner
apiGroup: rbac.authorization.k8s.io
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: nfs-prod-distribute
provisioner: cmlc-nfs-storage # must match the PROVISIONER_NAME env var in the provisioner Deployment
parameters:
  archiveOnDelete: "false"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-client-provisioner
labels:
app: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: kube-system # keep consistent with the namespace in the RBAC objects
spec:
replicas: 1
selector:
matchLabels:
app: nfs-client-provisioner
strategy:
type: Recreate
template:
metadata:
labels:
app: nfs-client-provisioner
spec:
imagePullSecrets:
- name: harborsecret
serviceAccountName: nfs-client-provisioner
containers:
- name: nfs-client-provisioner
image: 10.215.66.85:8033/cmii/nfs-subdir-external-provisioner:v4.0.2
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: cmlc-nfs-storage
- name: NFS_SERVER
value: 10.215.66.89
- name: NFS_PATH
value: /var/lib/docker/nfs_data
volumes:
- name: nfs-client-root
nfs:
server: 10.215.66.89
path: /var/lib/docker/nfs_data
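After deploying the provisioner, the StorageClass and the provisioner log are the two things worth checking (a sketch):
kubectl get storageclass nfs-prod-distribute
kubectl -n kube-system logs deployment/nfs-client-provisioner --tail=20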


@@ -3,12 +3,12 @@ apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: nfs-backend-log-pvc
-  namespace: uavcloud-devoperation
+  namespace: gsyd-app
   labels:
     cmii.type: middleware-base
     cmii.app: nfs-backend-log-pvc
     helm.sh/chart: all-persistence-volume-claims-1.1.0
-    app.kubernetes.io/version: 5.6.0
+    app.kubernetes.io/version: 6.1.1
 spec:
   storageClassName: nfs-prod-distribute
   accessModes:
@@ -22,12 +22,12 @@ apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: helm-emqxs
-  namespace: uavcloud-devoperation
+  namespace: gsyd-app
   labels:
     cmii.type: middleware-base
     cmii.app: helm-emqxs
     helm.sh/chart: all-persistence-volume-claims-1.1.0
-    app.kubernetes.io/version: 5.6.0
+    app.kubernetes.io/version: 6.1.1
 spec:
   storageClassName: nfs-prod-distribute
   accessModes:
@@ -41,12 +41,12 @@ apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: helm-mongo
-  namespace: uavcloud-devoperation
+  namespace: gsyd-app
   labels:
     cmii.type: middleware-base
     cmii.app: helm-mongo
     helm.sh/chart: all-persistence-volume-claims-1.1.0
-    app.kubernetes.io/version: 5.6.0
+    app.kubernetes.io/version: 6.1.1
 spec:
   storageClassName: nfs-prod-distribute
   accessModes:
@@ -60,12 +60,12 @@ apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: helm-rabbitmq
-  namespace: uavcloud-devoperation
+  namespace: gsyd-app
   labels:
     cmii.type: middleware-base
     cmii.app: helm-rabbitmq
     helm.sh/chart: all-persistence-volume-claims-1.1.0
-    app.kubernetes.io/version: 5.6.0
+    app.kubernetes.io/version: 6.1.1
 spec:
   storageClassName: nfs-prod-distribute
   accessModes:


@@ -0,0 +1,328 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: helm-rabbitmq
namespace: gsyd-app
labels:
app.kubernetes.io/name: helm-rabbitmq
helm.sh/chart: rabbitmq-8.26.1
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/managed-by: rabbitmq
automountServiceAccountToken: true
secrets:
- name: helm-rabbitmq
---
apiVersion: v1
kind: Secret
metadata:
name: helm-rabbitmq
namespace: gsyd-app
labels:
app.kubernetes.io/name: helm-rabbitmq
helm.sh/chart: rabbitmq-8.26.1
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/managed-by: rabbitmq
type: Opaque
data:
rabbitmq-password: "blljUk45MXIuX2hq"
rabbitmq-erlang-cookie: "emFBRmt1ZU1xMkJieXZvdHRYbWpoWk52UThuVXFzcTU="
---
apiVersion: v1
kind: ConfigMap
metadata:
name: helm-rabbitmq-config
namespace: gsyd-app
labels:
app.kubernetes.io/name: helm-rabbitmq
helm.sh/chart: rabbitmq-8.26.1
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/managed-by: rabbitmq
data:
rabbitmq.conf: |-
## Username and password
##
default_user = admin
default_pass = nYcRN91r._hj
## Clustering
##
cluster_formation.peer_discovery_backend = rabbit_peer_discovery_k8s
cluster_formation.k8s.host = kubernetes.default.svc.cluster.local
cluster_formation.node_cleanup.interval = 10
cluster_formation.node_cleanup.only_log_warning = true
cluster_partition_handling = autoheal
# queue master locator
queue_master_locator = min-masters
# enable guest user
loopback_users.guest = false
#default_vhost = default-vhost
#disk_free_limit.absolute = 50MB
#load_definitions = /app/load_definition.json
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: helm-rabbitmq-endpoint-reader
namespace: gsyd-app
labels:
app.kubernetes.io/name: helm-rabbitmq
helm.sh/chart: rabbitmq-8.26.1
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/managed-by: rabbitmq
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: helm-rabbitmq-endpoint-reader
namespace: gsyd-app
labels:
app.kubernetes.io/name: helm-rabbitmq
helm.sh/chart: rabbitmq-8.26.1
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/managed-by: rabbitmq
subjects:
- kind: ServiceAccount
name: helm-rabbitmq
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: helm-rabbitmq-endpoint-reader
---
apiVersion: v1
kind: Service
metadata:
name: helm-rabbitmq-headless
namespace: gsyd-app
labels:
app.kubernetes.io/name: helm-rabbitmq
helm.sh/chart: rabbitmq-8.26.1
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/managed-by: rabbitmq
spec:
clusterIP: None
ports:
- name: epmd
port: 4369
targetPort: epmd
- name: amqp
port: 5672
targetPort: amqp
- name: dist
port: 25672
targetPort: dist
- name: dashboard
port: 15672
targetPort: stats
selector:
app.kubernetes.io/name: helm-rabbitmq
app.kubernetes.io/release: gsyd-app
publishNotReadyAddresses: true
---
apiVersion: v1
kind: Service
metadata:
name: helm-rabbitmq
namespace: gsyd-app
labels:
app.kubernetes.io/name: helm-rabbitmq
helm.sh/chart: rabbitmq-8.26.1
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/managed-by: rabbitmq
spec:
type: NodePort
ports:
- name: amqp
port: 5672
targetPort: amqp
nodePort: 35672
- name: dashboard
port: 15672
targetPort: dashboard
nodePort: 36675
selector:
app.kubernetes.io/name: helm-rabbitmq
app.kubernetes.io/release: gsyd-app
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: helm-rabbitmq
namespace: gsyd-app
labels:
app.kubernetes.io/name: helm-rabbitmq
helm.sh/chart: rabbitmq-8.26.1
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/managed-by: rabbitmq
spec:
serviceName: helm-rabbitmq-headless
podManagementPolicy: OrderedReady
replicas: 1
updateStrategy:
type: RollingUpdate
selector:
matchLabels:
app.kubernetes.io/name: helm-rabbitmq
app.kubernetes.io/release: gsyd-app
template:
metadata:
labels:
app.kubernetes.io/name: helm-rabbitmq
helm.sh/chart: rabbitmq-8.26.1
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/managed-by: rabbitmq
annotations:
checksum/config: d6c2caa9572f64a06d9f7daa34c664a186b4778cd1697ef8e59663152fc628f1
checksum/secret: d764e7b3d999e7324d1afdfec6140092a612f04b6e0306818675815cec2f454f
spec:
imagePullSecrets:
- name: harborsecret
serviceAccountName: helm-rabbitmq
affinity: {}
securityContext:
fsGroup: 5001
runAsUser: 5001
terminationGracePeriodSeconds: 120
initContainers:
- name: volume-permissions
image: 10.215.66.85:8033/cmii/bitnami-shell:11-debian-11-r136
imagePullPolicy: "Always"
command:
- /bin/bash
args:
- -ec
- |
mkdir -p "/bitnami/rabbitmq/mnesia"
chown -R "5001:5001" "/bitnami/rabbitmq/mnesia"
securityContext:
runAsUser: 0
resources:
limits: {}
requests: {}
volumeMounts:
- name: data
mountPath: /bitnami/rabbitmq/mnesia
containers:
- name: rabbitmq
image: 10.215.66.85:8033/cmii/rabbitmq:3.9.12-debian-10-r3
imagePullPolicy: "Always"
env:
- name: BITNAMI_DEBUG
value: "false"
- name: MY_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: K8S_SERVICE_NAME
value: "helm-rabbitmq-headless"
- name: K8S_ADDRESS_TYPE
value: hostname
- name: RABBITMQ_FORCE_BOOT
value: "no"
- name: RABBITMQ_NODE_NAME
value: "rabbit@$(MY_POD_NAME).$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local"
- name: K8S_HOSTNAME_SUFFIX
value: ".$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local"
- name: RABBITMQ_MNESIA_DIR
value: "/bitnami/rabbitmq/mnesia/$(RABBITMQ_NODE_NAME)"
- name: RABBITMQ_LDAP_ENABLE
value: "no"
- name: RABBITMQ_LOGS
value: "-"
- name: RABBITMQ_ULIMIT_NOFILES
value: "65536"
- name: RABBITMQ_USE_LONGNAME
value: "true"
- name: RABBITMQ_ERL_COOKIE
valueFrom:
secretKeyRef:
name: helm-rabbitmq
key: rabbitmq-erlang-cookie
- name: RABBITMQ_LOAD_DEFINITIONS
value: "no"
- name: RABBITMQ_SECURE_PASSWORD
value: "yes"
- name: RABBITMQ_USERNAME
value: "admin"
- name: RABBITMQ_PASSWORD
valueFrom:
secretKeyRef:
name: helm-rabbitmq
key: rabbitmq-password
- name: RABBITMQ_PLUGINS
value: "rabbitmq_management, rabbitmq_peer_discovery_k8s, rabbitmq_shovel, rabbitmq_shovel_management, rabbitmq_auth_backend_ldap"
ports:
- name: amqp
containerPort: 5672
- name: dist
containerPort: 25672
- name: dashboard
containerPort: 15672
- name: epmd
containerPort: 4369
livenessProbe:
exec:
command:
- /bin/bash
- -ec
- rabbitmq-diagnostics -q ping
initialDelaySeconds: 120
periodSeconds: 30
timeoutSeconds: 20
successThreshold: 1
failureThreshold: 6
readinessProbe:
exec:
command:
- /bin/bash
- -ec
- rabbitmq-diagnostics -q check_running && rabbitmq-diagnostics -q check_local_alarms
initialDelaySeconds: 10
periodSeconds: 30
timeoutSeconds: 20
successThreshold: 1
failureThreshold: 3
lifecycle:
preStop:
exec:
command:
- /bin/bash
- -ec
- |
if [[ -f /opt/bitnami/scripts/rabbitmq/nodeshutdown.sh ]]; then
/opt/bitnami/scripts/rabbitmq/nodeshutdown.sh -t "120" -d "false"
else
rabbitmqctl stop_app
fi
resources:
limits: {}
requests: {}
volumeMounts:
- name: configuration
mountPath: /bitnami/rabbitmq/conf
- name: data
mountPath: /bitnami/rabbitmq/mnesia
volumes:
- name: configuration
configMap:
name: helm-rabbitmq-config
items:
- key: rabbitmq.conf
path: rabbitmq.conf
- name: data
persistentVolumeClaim:
claimName: helm-rabbitmq
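The probes above already use rabbitmq-diagnostics; the same command works for a manual check, and the management API is reachable on NodePort 36675 (the node IP and the decoded secret password are placeholders):
kubectl -n gsyd-app exec helm-rabbitmq-0 -- rabbitmq-diagnostics -q check_running
# management API via the NodePort service; the password is the rabbitmq-password secret value
curl -u admin:<rabbitmq-password> 'http://<node-ip>:36675/api/overview'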


@@ -0,0 +1,585 @@
---
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: true
metadata:
name: helm-redis
namespace: gsyd-app
labels:
app.kubernetes.io/name: redis-db
octopus.control: redis-db-wdd
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/managed-by: octopus
---
apiVersion: v1
kind: Secret
metadata:
name: helm-redis
namespace: gsyd-app
labels:
app.kubernetes.io/name: redis-db
octopus.control: redis-db-wdd
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/managed-by: octopus
type: Opaque
data:
redis-password: "TWNhY2hlQDQ1MjI="
---
apiVersion: v1
kind: ConfigMap
metadata:
name: helm-redis-configuration
namespace: gsyd-app
labels:
app.kubernetes.io/name: redis-db
octopus.control: redis-db-wdd
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/managed-by: octopus
data:
redis.conf: |-
# User-supplied common configuration:
# Enable AOF https://redis.io/topics/persistence#append-only-file
appendonly yes
# Disable RDB persistence, AOF persistence already enabled.
save ""
# End of common configuration
master.conf: |-
dir /data
# User-supplied master configuration:
rename-command FLUSHDB ""
rename-command FLUSHALL ""
# End of master configuration
replica.conf: |-
dir /data
slave-read-only yes
# User-supplied replica configuration:
rename-command FLUSHDB ""
rename-command FLUSHALL ""
# End of replica configuration
---
# Source: outside-deploy/charts/redis-db/templates/health-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: helm-redis-health
namespace: gsyd-app
labels:
app.kubernetes.io/name: redis-db
octopus.control: redis-db-wdd
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/managed-by: octopus
data:
ping_readiness_local.sh: |-
#!/bin/bash
[[ -f $REDIS_PASSWORD_FILE ]] && export REDIS_PASSWORD="$(< "${REDIS_PASSWORD_FILE}")"
[[ -n "$REDIS_PASSWORD" ]] && export REDISCLI_AUTH="$REDIS_PASSWORD"
response=$(
timeout -s 3 $1 \
redis-cli \
-h localhost \
-p $REDIS_PORT \
ping
)
if [ "$response" != "PONG" ]; then
echo "$response"
exit 1
fi
ping_liveness_local.sh: |-
#!/bin/bash
[[ -f $REDIS_PASSWORD_FILE ]] && export REDIS_PASSWORD="$(< "${REDIS_PASSWORD_FILE}")"
[[ -n "$REDIS_PASSWORD" ]] && export REDISCLI_AUTH="$REDIS_PASSWORD"
response=$(
timeout -s 3 $1 \
redis-cli \
-h localhost \
-p $REDIS_PORT \
ping
)
if [ "$response" != "PONG" ] && [ "$response" != "LOADING Redis is loading the dataset in memory" ]; then
echo "$response"
exit 1
fi
ping_readiness_master.sh: |-
#!/bin/bash
[[ -f $REDIS_MASTER_PASSWORD_FILE ]] && export REDIS_MASTER_PASSWORD="$(< "${REDIS_MASTER_PASSWORD_FILE}")"
[[ -n "$REDIS_MASTER_PASSWORD" ]] && export REDISCLI_AUTH="$REDIS_MASTER_PASSWORD"
response=$(
timeout -s 3 $1 \
redis-cli \
-h $REDIS_MASTER_HOST \
-p $REDIS_MASTER_PORT_NUMBER \
ping
)
if [ "$response" != "PONG" ]; then
echo "$response"
exit 1
fi
ping_liveness_master.sh: |-
#!/bin/bash
[[ -f $REDIS_MASTER_PASSWORD_FILE ]] && export REDIS_MASTER_PASSWORD="$(< "${REDIS_MASTER_PASSWORD_FILE}")"
[[ -n "$REDIS_MASTER_PASSWORD" ]] && export REDISCLI_AUTH="$REDIS_MASTER_PASSWORD"
response=$(
timeout -s 3 $1 \
redis-cli \
-h $REDIS_MASTER_HOST \
-p $REDIS_MASTER_PORT_NUMBER \
ping
)
if [ "$response" != "PONG" ] && [ "$response" != "LOADING Redis is loading the dataset in memory" ]; then
echo "$response"
exit 1
fi
ping_readiness_local_and_master.sh: |-
script_dir="$(dirname "$0")"
exit_status=0
"$script_dir/ping_readiness_local.sh" $1 || exit_status=$?
"$script_dir/ping_readiness_master.sh" $1 || exit_status=$?
exit $exit_status
ping_liveness_local_and_master.sh: |-
script_dir="$(dirname "$0")"
exit_status=0
"$script_dir/ping_liveness_local.sh" $1 || exit_status=$?
"$script_dir/ping_liveness_master.sh" $1 || exit_status=$?
exit $exit_status
---
# Source: outside-deploy/charts/redis-db/templates/scripts-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: helm-redis-scripts
namespace: gsyd-app
labels:
app.kubernetes.io/name: redis-db
octopus.control: redis-db-wdd
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/managed-by: octopus
data:
start-master.sh: |
#!/bin/bash
[[ -f $REDIS_PASSWORD_FILE ]] && export REDIS_PASSWORD="$(< "${REDIS_PASSWORD_FILE}")"
if [[ ! -f /opt/bitnami/redis/etc/master.conf ]];then
cp /opt/bitnami/redis/mounted-etc/master.conf /opt/bitnami/redis/etc/master.conf
fi
if [[ ! -f /opt/bitnami/redis/etc/redis.conf ]];then
cp /opt/bitnami/redis/mounted-etc/redis.conf /opt/bitnami/redis/etc/redis.conf
fi
ARGS=("--port" "${REDIS_PORT}")
ARGS+=("--requirepass" "${REDIS_PASSWORD}")
ARGS+=("--masterauth" "${REDIS_PASSWORD}")
ARGS+=("--include" "/opt/bitnami/redis/etc/redis.conf")
ARGS+=("--include" "/opt/bitnami/redis/etc/master.conf")
exec redis-server "${ARGS[@]}"
start-replica.sh: |
#!/bin/bash
get_port() {
hostname="$1"
type="$2"
port_var=$(echo "${hostname^^}_SERVICE_PORT_$type" | sed "s/-/_/g")
port=${!port_var}
if [ -z "$port" ]; then
case $type in
"SENTINEL")
echo 26379
;;
"REDIS")
echo 6379
;;
esac
else
echo $port
fi
}
get_full_hostname() {
hostname="$1"
echo "${hostname}.${HEADLESS_SERVICE}"
}
REDISPORT=$(get_port "$HOSTNAME" "REDIS")
[[ -f $REDIS_PASSWORD_FILE ]] && export REDIS_PASSWORD="$(< "${REDIS_PASSWORD_FILE}")"
[[ -f $REDIS_MASTER_PASSWORD_FILE ]] && export REDIS_MASTER_PASSWORD="$(< "${REDIS_MASTER_PASSWORD_FILE}")"
if [[ ! -f /opt/bitnami/redis/etc/replica.conf ]];then
cp /opt/bitnami/redis/mounted-etc/replica.conf /opt/bitnami/redis/etc/replica.conf
fi
if [[ ! -f /opt/bitnami/redis/etc/redis.conf ]];then
cp /opt/bitnami/redis/mounted-etc/redis.conf /opt/bitnami/redis/etc/redis.conf
fi
echo "" >> /opt/bitnami/redis/etc/replica.conf
echo "replica-announce-port $REDISPORT" >> /opt/bitnami/redis/etc/replica.conf
echo "replica-announce-ip $(get_full_hostname "$HOSTNAME")" >> /opt/bitnami/redis/etc/replica.conf
ARGS=("--port" "${REDIS_PORT}")
ARGS+=("--slaveof" "${REDIS_MASTER_HOST}" "${REDIS_MASTER_PORT_NUMBER}")
ARGS+=("--requirepass" "${REDIS_PASSWORD}")
ARGS+=("--masterauth" "${REDIS_MASTER_PASSWORD}")
ARGS+=("--include" "/opt/bitnami/redis/etc/redis.conf")
ARGS+=("--include" "/opt/bitnami/redis/etc/replica.conf")
exec redis-server "${ARGS[@]}"
---
# Source: outside-deploy/charts/redis-db/templates/headless-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: helm-redis-headless
namespace: gsyd-app
labels:
app.kubernetes.io/name: redis-db
octopus.control: redis-db-wdd
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/managed-by: octopus
spec:
type: ClusterIP
clusterIP: None
ports:
- name: tcp-redis
port: 6379
targetPort: redis
selector:
app.kubernetes.io/name: redis-db
app.kubernetes.io/release: gsyd-app
---
# Source: outside-deploy/charts/redis-db/templates/master/service.yaml
apiVersion: v1
kind: Service
metadata:
name: helm-redis-master
namespace: gsyd-app
labels:
app.kubernetes.io/name: redis-db
octopus.control: redis-db-wdd
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/managed-by: octopus
cmii.type: middleware
cmii.app: redis
app.kubernetes.io/component: master
spec:
type: ClusterIP
ports:
- name: tcp-redis
port: 6379
targetPort: redis
nodePort: null
selector:
app.kubernetes.io/name: redis-db
app.kubernetes.io/release: gsyd-app
cmii.type: middleware
cmii.app: redis
app.kubernetes.io/component: master
---
# Source: outside-deploy/charts/redis-db/templates/replicas/service.yaml
apiVersion: v1
kind: Service
metadata:
name: helm-redis-replicas
namespace: gsyd-app
labels:
app.kubernetes.io/name: redis-db
octopus.control: redis-db-wdd
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/managed-by: octopus
app.kubernetes.io/component: replica
spec:
type: ClusterIP
ports:
- name: tcp-redis
port: 6379
targetPort: redis
nodePort: null
selector:
app.kubernetes.io/name: redis-db
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/component: replica
---
# Source: outside-deploy/charts/redis-db/templates/master/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: helm-redis-master
namespace: gsyd-app
labels:
app.kubernetes.io/name: redis-db
octopus.control: redis-db-wdd
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/managed-by: octopus
cmii.type: middleware
cmii.app: redis
app.kubernetes.io/component: master
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: redis-db
app.kubernetes.io/release: gsyd-app
cmii.type: middleware
cmii.app: redis
app.kubernetes.io/component: master
serviceName: helm-redis-headless
updateStrategy:
rollingUpdate: {}
type: RollingUpdate
template:
metadata:
labels:
app.kubernetes.io/name: redis-db
octopus.control: redis-db-wdd
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/managed-by: octopus
cmii.type: middleware
cmii.app: redis
app.kubernetes.io/component: master
annotations:
checksum/configmap: b64aa5db67e6e63811f3c1095b9fce34d83c86a471fccdda0e48eedb53a179b0
checksum/health: 6e0a6330e5ac63e565ae92af1444527d72d8897f91266f333555b3d323570623
checksum/scripts: b88df93710b7c42a76006e20218f05c6e500e6cc2affd4bb1985832f03166e98
checksum/secret: 43f1b0e20f9cb2de936bd182bc3683b720fc3cf4f4e76cb23c06a52398a50e8d
spec:
affinity: {}
securityContext:
fsGroup: 1001
serviceAccountName: helm-redis
imagePullSecrets:
- name: harborsecret
terminationGracePeriodSeconds: 30
containers:
- name: redis
image: 10.215.66.85:8033/cmii/redis:6.2.6-debian-10-r0
imagePullPolicy: "Always"
securityContext:
runAsUser: 1001
command:
- /bin/bash
args:
- -c
- /opt/bitnami/scripts/start-scripts/start-master.sh
env:
- name: BITNAMI_DEBUG
value: "false"
- name: REDIS_REPLICATION_MODE
value: master
- name: ALLOW_EMPTY_PASSWORD
value: "no"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: helm-redis
key: redis-password
- name: REDIS_TLS_ENABLED
value: "no"
- name: REDIS_PORT
value: "6379"
ports:
- name: redis
containerPort: 6379
livenessProbe:
initialDelaySeconds: 20
periodSeconds: 5
# One second longer than command timeout should prevent generation of zombie processes.
timeoutSeconds: 6
successThreshold: 1
failureThreshold: 5
exec:
command:
- sh
- -c
- /health/ping_liveness_local.sh 5
readinessProbe:
initialDelaySeconds: 20
periodSeconds: 5
timeoutSeconds: 2
successThreshold: 1
failureThreshold: 5
exec:
command:
- sh
- -c
- /health/ping_readiness_local.sh 1
resources:
limits:
cpu: "2"
memory: 8Gi
requests:
cpu: "2"
memory: 8Gi
volumeMounts:
- name: start-scripts
mountPath: /opt/bitnami/scripts/start-scripts
- name: health
mountPath: /health
- name: redis-data
mountPath: /data
- name: config
mountPath: /opt/bitnami/redis/mounted-etc
- name: redis-tmp-conf
mountPath: /opt/bitnami/redis/etc/
- name: tmp
mountPath: /tmp
volumes:
- name: start-scripts
configMap:
name: helm-redis-scripts
defaultMode: 0755
- name: health
configMap:
name: helm-redis-health
defaultMode: 0755
- name: config
configMap:
name: helm-redis-configuration
- name: redis-tmp-conf
emptyDir: {}
- name: tmp
emptyDir: {}
- name: redis-data
emptyDir: {}
---
# Source: outside-deploy/charts/redis-db/templates/replicas/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: helm-redis-replicas
namespace: gsyd-app
labels:
app.kubernetes.io/name: redis-db
octopus.control: redis-db-wdd
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/managed-by: octopus
app.kubernetes.io/component: replica
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: redis-db
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/component: replica
serviceName: helm-redis-headless
updateStrategy:
rollingUpdate: {}
type: RollingUpdate
template:
metadata:
labels:
app.kubernetes.io/name: redis-db
octopus.control: redis-db-wdd
app.kubernetes.io/release: gsyd-app
app.kubernetes.io/managed-by: octopus
app.kubernetes.io/component: replica
annotations:
checksum/configmap: b64aa5db67e6e63811f3c1095b9fce34d83c86a471fccdda0e48eedb53a179b0
checksum/health: 6e0a6330e5ac63e565ae92af1444527d72d8897f91266f333555b3d323570623
checksum/scripts: b88df93710b7c42a76006e20218f05c6e500e6cc2affd4bb1985832f03166e98
checksum/secret: 43f1b0e20f9cb2de936bd182bc3683b720fc3cf4f4e76cb23c06a52398a50e8d
spec:
imagePullSecrets:
- name: harborsecret
securityContext:
fsGroup: 1001
serviceAccountName: helm-redis
terminationGracePeriodSeconds: 30
containers:
- name: redis
image: 10.215.66.85:8033/cmii/redis:6.2.6-debian-10-r0
imagePullPolicy: "Always"
securityContext:
runAsUser: 1001
command:
- /bin/bash
args:
- -c
- /opt/bitnami/scripts/start-scripts/start-replica.sh
env:
- name: BITNAMI_DEBUG
value: "false"
- name: REDIS_REPLICATION_MODE
value: slave
- name: REDIS_MASTER_HOST
value: helm-redis-master-0.helm-redis-headless.gsyd-app.svc.cluster.local
- name: REDIS_MASTER_PORT_NUMBER
value: "6379"
- name: ALLOW_EMPTY_PASSWORD
value: "no"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: helm-redis
key: redis-password
- name: REDIS_MASTER_PASSWORD
valueFrom:
secretKeyRef:
name: helm-redis
key: redis-password
- name: REDIS_TLS_ENABLED
value: "no"
- name: REDIS_PORT
value: "6379"
ports:
- name: redis
containerPort: 6379
livenessProbe:
initialDelaySeconds: 20
periodSeconds: 5
timeoutSeconds: 6
successThreshold: 1
failureThreshold: 5
exec:
command:
- sh
- -c
- /health/ping_liveness_local_and_master.sh 5
readinessProbe:
initialDelaySeconds: 20
periodSeconds: 5
timeoutSeconds: 2
successThreshold: 1
failureThreshold: 5
exec:
command:
- sh
- -c
- /health/ping_readiness_local_and_master.sh 1
resources:
limits:
cpu: "2"
memory: 8Gi
requests:
cpu: "2"
memory: 8Gi
volumeMounts:
- name: start-scripts
mountPath: /opt/bitnami/scripts/start-scripts
- name: health
mountPath: /health
- name: redis-data
mountPath: /data
- name: config
mountPath: /opt/bitnami/redis/mounted-etc
- name: redis-tmp-conf
mountPath: /opt/bitnami/redis/etc
volumes:
- name: start-scripts
configMap:
name: helm-redis-scripts
defaultMode: 0755
- name: health
configMap:
name: helm-redis-health
defaultMode: 0755
- name: config
configMap:
name: helm-redis-configuration
- name: redis-tmp-conf
emptyDir: {}
- name: redis-data
emptyDir: {}
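A minimal master/replica check, reusing the approach of the health scripts above (the password placeholder stands for the helm-redis secret value):
kubectl -n gsyd-app exec helm-redis-master-0 -- redis-cli -a '<redis-password>' ping
kubectl -n gsyd-app exec helm-redis-replicas-0 -- redis-cli -a '<redis-password>' info replication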


@@ -0,0 +1,496 @@
---
kind: ConfigMap
apiVersion: v1
metadata:
name: helm-live-srs-cm
namespace: gsyd-app
labels:
cmii.app: live-srs
cmii.type: live
octopus.control: wdd
app.kubernetes.io/managed-by: octopus
helm.sh/chart: cmlc-live-srs-rtc-2.0.0
data:
srs.rtc.conf: |-
listen 31935;
max_connections 4096;
srs_log_tank console;
srs_log_level info;
srs_log_file /home/srs.log;
daemon off;
http_api {
enabled on;
listen 1985;
crossdomain on;
}
stats {
network 0;
}
http_server {
enabled on;
listen 8080;
dir /home/hls;
}
srt_server {
enabled on;
listen 30556;
maxbw 1000000000;
connect_timeout 4000;
peerlatency 600;
recvlatency 600;
}
rtc_server {
enabled on;
listen 30090;
candidate $CANDIDATE;
}
vhost __defaultVhost__ {
http_hooks {
enabled on;
on_publish http://helm-live-op-svc-v2:8080/hooks/on_push;
}
http_remux {
enabled on;
}
rtc {
enabled on;
rtmp_to_rtc on;
rtc_to_rtmp on;
keep_bframe off;
}
tcp_nodelay on;
min_latency on;
play {
gop_cache off;
mw_latency 100;
mw_msgs 10;
}
publish {
firstpkt_timeout 8000;
normal_timeout 4000;
mr on;
}
dvr {
enabled off;
dvr_path /home/dvr/[app]/[stream]/[2006][01]/[timestamp].mp4;
dvr_plan session;
}
hls {
enabled on;
hls_path /home/hls;
hls_fragment 10;
hls_window 60;
hls_m3u8_file [app]/[stream].m3u8;
hls_ts_file [app]/[stream]/[2006][01][02]/[timestamp]-[duration].ts;
hls_cleanup on;
hls_entry_prefix http://117.156.17.88:8088;
}
}
---
kind: Service
apiVersion: v1
metadata:
name: helm-live-srs-svc-exporter
namespace: gsyd-app
labels:
octopus.control: wdd
app.kubernetes.io/managed-by: octopus
spec:
ports:
- name: rtmp
protocol: TCP
port: 31935
targetPort: 31935
nodePort: 31935
- name: rtc
protocol: UDP
port: 30090
targetPort: 30090
nodePort: 30090
- name: rtc-tcp
protocol: TCP
port: 30090
targetPort: 30090
nodePort: 30090
- name: srt
protocol: UDP
port: 30556
targetPort: 30556
nodePort: 30556
- name: api
protocol: TCP
port: 1985
targetPort: 1985
nodePort: 30080
selector:
srs-role: rtc
type: NodePort
sessionAffinity: None
externalTrafficPolicy: Cluster
---
kind: Service
apiVersion: v1
metadata:
name: helm-live-srs-svc
namespace: gsyd-app
labels:
octopus.control: wdd
app.kubernetes.io/managed-by: octopus
spec:
ports:
- name: http
protocol: TCP
port: 8080
targetPort: 8080
- name: api
protocol: TCP
port: 1985
targetPort: 1985
selector:
srs-role: rtc
type: ClusterIP
sessionAffinity: None
---
kind: Service
apiVersion: v1
metadata:
name: helm-live-srsrtc-svc
namespace: gsyd-app
labels:
octopus.control: wdd
app.kubernetes.io/managed-by: octopus
spec:
ports:
- name: rtmp
protocol: TCP
port: 31935
targetPort: 31935
selector:
srs-role: rtc
type: ClusterIP
sessionAffinity: None
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
name: helm-live-srs-rtc
namespace: gsyd-app
labels:
octopus.control: wdd
app.kubernetes.io/managed-by: octopus
cmii.app: live-srs
cmii.type: live
helm.sh/chart: cmlc-live-srs-rtc-2.0.0
srs-role: rtc
spec:
replicas: 1
selector:
matchLabels:
srs-role: rtc
template:
metadata:
labels:
srs-role: rtc
spec:
volumes:
- name: srs-conf-file
configMap:
name: helm-live-srs-cm
items:
- key: srs.rtc.conf
path: docker.conf
defaultMode: 420
- name: srs-vol
emptyDir:
sizeLimit: 8Gi
containers:
- name: srs-rtc
image: 10.215.66.85:8033/cmii/srs:v5.0.195
ports:
- name: srs-rtmp
containerPort: 31935
protocol: TCP
- name: srs-api
containerPort: 1985
protocol: TCP
- name: srs-flv
containerPort: 8080
protocol: TCP
- name: srs-webrtc
containerPort: 30090
protocol: UDP
- name: srs-webrtc-tcp
containerPort: 30090
protocol: TCP
- name: srs-srt
containerPort: 30556
protocol: UDP
env:
- name: CANDIDATE
value: 117.156.17.88
resources:
limits:
cpu: 2000m
memory: 4Gi
requests:
cpu: 100m
memory: 256Mi
volumeMounts:
- name: srs-conf-file
mountPath: /usr/local/srs/conf/docker.conf
subPath: docker.conf
- name: srs-vol
mountPath: /home/dvr
subPath: gsyd-app/helm-live/dvr
- name: srs-vol
mountPath: /home/hls
subPath: gsyd-app/helm-live/hls
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: Always
- name: oss-adaptor
image: 10.215.66.85:8033/cmii/cmii-srs-oss-adaptor:2023-SA
env:
- name: OSS_ENDPOINT
value: 'http://10.215.66.89:9000'
- name: OSS_AK
value: cmii
- name: OSS_SK
value: 'B#923fC7mk'
- name: OSS_BUCKET
value: live-cluster-hls
- name: SRS_OP
value: 'http://helm-live-op-svc-v2:8080'
- name: MYSQL_ENDPOINT
value: 'helm-mysql:3306'
- name: MYSQL_USERNAME
value: k8s_admin
- name: MYSQL_PASSWORD
value: fP#UaH6qQ3)8
- name: MYSQL_DATABASE
value: cmii_live_srs_op
- name: MYSQL_TABLE
value: live_segment
- name: LOG_LEVEL
value: info
- name: OSS_META
value: 'yes'
resources:
limits:
cpu: 2000m
memory: 4Gi
requests:
cpu: 100m
memory: 256Mi
volumeMounts:
- name: srs-vol
mountPath: /cmii/share/hls
subPath: gsyd-app/helm-live/hls
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: Always
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
securityContext: {}
imagePullSecrets:
- name: harborsecret
affinity: {}
schedulerName: default-scheduler
serviceName: helm-live-srsrtc-svc
podManagementPolicy: OrderedReady
updateStrategy:
type: RollingUpdate
rollingUpdate:
partition: 0
revisionHistoryLimit: 10
---
# live-srs section
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: helm-live-op-v2
namespace: gsyd-app
labels:
octopus.control: wdd
app.kubernetes.io/managed-by: octopus
cmii.app: live-engine
cmii.type: live
helm.sh/chart: cmlc-live-live-op-2.0.0
live-role: op-v2
spec:
replicas: 1
selector:
matchLabels:
live-role: op-v2
template:
metadata:
labels:
live-role: op-v2
spec:
volumes:
- name: srs-conf-file
configMap:
name: helm-live-op-cm-v2
items:
- key: live.op.conf
path: bootstrap.yaml
defaultMode: 420
containers:
- name: helm-live-op-v2
image: 10.215.66.85:8033/cmii/cmii-live-operator:5.2.0
ports:
- name: operator
containerPort: 8080
protocol: TCP
resources:
limits:
cpu: 4800m
memory: 4Gi
requests:
cpu: 100m
memory: 256Mi
volumeMounts:
- name: srs-conf-file
mountPath: /cmii/bootstrap.yaml
subPath: bootstrap.yaml
livenessProbe:
httpGet:
path: /cmii/health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
periodSeconds: 20
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /cmii/health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
periodSeconds: 20
successThreshold: 1
failureThreshold: 3
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: Always
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
securityContext: {}
imagePullSecrets:
- name: harborsecret
affinity: {}
schedulerName: default-scheduler
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 25%
revisionHistoryLimit: 10
progressDeadlineSeconds: 600
---
kind: Service
apiVersion: v1
metadata:
name: helm-live-op-svc-v2
namespace: gsyd-app
labels:
octopus.control: wdd
app.kubernetes.io/managed-by: octopus
spec:
ports:
- protocol: TCP
port: 8080
targetPort: 8080
nodePort: 30333
selector:
live-role: op-v2
type: NodePort
sessionAffinity: None
---
kind: Service
apiVersion: v1
metadata:
name: helm-live-op-svc
namespace: gsyd-app
labels:
octopus.control: wdd
app.kubernetes.io/managed-by: octopus
spec:
ports:
- protocol: TCP
port: 8080
targetPort: 8080
selector:
live-role: op
type: ClusterIP
sessionAffinity: None
---
kind: ConfigMap
apiVersion: v1
metadata:
name: helm-live-op-cm-v2
namespace: gsyd-app
labels:
octopus.control: wdd
app.kubernetes.io/managed-by: octopus
cmii.app: live-engine
cmii.type: live
data:
live.op.conf: |-
server:
port: 8080
spring:
main:
allow-bean-definition-overriding: true
allow-circular-references: true
application:
name: cmii-live-operator
platform:
info:
name: cmii-live-operator
description: cmii-live-operator
version: 6.1.1
scanPackage: com.cmii.live.op
cloud:
nacos:
config:
username: nacos
password: KingKong@95461234
server-addr: helm-nacos:8848
extension-configs:
- data-id: cmii-live-operator.yml
group: 6.1.1
refresh: true
shared-configs:
- data-id: cmii-backend-system.yml
group: 6.1.1
refresh: true
discovery:
enabled: false
live:
engine:
type: srs
endpoint: 'http://helm-live-srs-svc:1985'
proto:
rtmp: 'rtmp://117.156.17.88:31935'
rtsp: 'rtsp://117.156.17.88:30554'
srt: 'srt://117.156.17.88:30556'
flv: 'http://117.156.17.88:30500'
hls: 'http://117.156.17.88:30500'
rtc: 'webrtc://117.156.17.88:30080'
replay: 'https://117.156.17.88:30333'
minio:
endpoint: http://10.215.66.89:9000
access-key: cmii
secret-key: B#923fC7mk
bucket: live-cluster-hls
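One way to exercise the SRS pipeline end to end, assuming ffmpeg on a host that can reach the NodePorts (the stream name demo is arbitrary; the HLS URL follows hls_entry_prefix and the [app]/[stream].m3u8 template):
# publish a synthetic test pattern over RTMP
ffmpeg -re -f lavfi -i testsrc=size=640x360:rate=25 -c:v libx264 -f flv 'rtmp://117.156.17.88:31935/live/demo'
# then fetch the HLS playlist produced by the hls {} section
curl 'http://117.156.17.88:8088/live/demo.m3u8'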

File diff suppressed because it is too large


@@ -16,42 +16,98 @@ data:
kind: ConfigMap
apiVersion: v1
metadata:
-name: tenant-prefix-detection
+name: tenant-prefix-security
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
-ApplicationShortName: "detection",
-AppClientId: "APP_FDHW2VLVDWPnnOCy"
+ApplicationShortName: "security",
+AppClientId: "APP_JUSEMc7afyWXxvE7"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
-name: tenant-prefix-oms
+name: tenant-prefix-visualization
+namespace: jsntejpt
+data:
+ingress-config.js: |-
+var __GlobalIngressConfig = {
+TenantEnvironment: "",
+CloudHOST: "10.40.51.5:8088",
+ApplicationShortName: "visualization",
+AppClientId: "empty"
+}
+---
+kind: ConfigMap
+apiVersion: v1
+metadata:
+name: tenant-prefix-blockchain
+namespace: jsntejpt
+data:
+ingress-config.js: |-
+var __GlobalIngressConfig = {
+TenantEnvironment: "",
+CloudHOST: "10.40.51.5:8088",
+ApplicationShortName: "blockchain",
+AppClientId: "empty"
+}
+---
+kind: ConfigMap
+apiVersion: v1
+metadata:
+name: tenant-prefix-pangu
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
-ApplicationShortName: "oms",
+ApplicationShortName: "",
+AppClientId: "empty"
+}
+---
+kind: ConfigMap
+apiVersion: v1
+metadata:
+name: tenant-prefix-cmsportal
+namespace: jsntejpt
+data:
+ingress-config.js: |-
+var __GlobalIngressConfig = {
+TenantEnvironment: "",
+CloudHOST: "10.40.51.5:8088",
+ApplicationShortName: "cmsportal",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
-name: tenant-prefix-seniclive
+name: tenant-prefix-base
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
-ApplicationShortName: "seniclive",
+ApplicationShortName: "base",
+AppClientId: "APP_9LY41OaKSqk2btY0"
+}
+---
+kind: ConfigMap
+apiVersion: v1
+metadata:
+name: tenant-prefix-threedsimulation
+namespace: jsntejpt
+data:
+ingress-config.js: |-
+var __GlobalIngressConfig = {
+TenantEnvironment: "",
+CloudHOST: "10.40.51.5:8088",
+ApplicationShortName: "threedsimulation",
AppClientId: "empty"
}
---
@@ -86,57 +142,113 @@ data:
kind: ConfigMap
apiVersion: v1
metadata:
-name: tenant-prefix-uasms
+name: tenant-prefix-multiterminal
+namespace: jsntejpt
+data:
+ingress-config.js: |-
+var __GlobalIngressConfig = {
+TenantEnvironment: "",
+CloudHOST: "10.40.51.5:8088",
+ApplicationShortName: "multiterminal",
+AppClientId: "APP_PvdfRRRBPL8xbIwl"
+}
+---
+kind: ConfigMap
+apiVersion: v1
+metadata:
+name: tenant-prefix-traffic
+namespace: jsntejpt
+data:
+ingress-config.js: |-
+var __GlobalIngressConfig = {
+TenantEnvironment: "",
+CloudHOST: "10.40.51.5:8088",
+ApplicationShortName: "traffic",
+AppClientId: "APP_Jc8i2wOQ1t73QEJS"
+}
+---
+kind: ConfigMap
+apiVersion: v1
+metadata:
+name: tenant-prefix-hljtt
+namespace: jsntejpt
+data:
+ingress-config.js: |-
+var __GlobalIngressConfig = {
+TenantEnvironment: "",
+CloudHOST: "10.40.51.5:8088",
+ApplicationShortName: "hljtt",
+AppClientId: "empty"
+}
+---
+kind: ConfigMap
+apiVersion: v1
+metadata:
+name: tenant-prefix-pilot2cloud
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
-ApplicationShortName: "uasms",
+ApplicationShortName: "pilot2cloud",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
-name: tenant-prefix-ai-brain
+name: tenant-prefix-classification
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
-ApplicationShortName: "ai-brain",
-AppClientId: "APP_rafnuCAmBESIVYMH"
+ApplicationShortName: "classification",
+AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
-name: tenant-prefix-logistics
+name: tenant-prefix-secenter
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
-ApplicationShortName: "logistics",
-AppClientId: "APP_PvdfRRRBPL8xbIwl"
+ApplicationShortName: "secenter",
+AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
-name: tenant-prefix-share
+name: tenant-prefix-emergency
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
-ApplicationShortName: "share",
-AppClientId: "APP_4lVSVI0ZGxTssir8"
+ApplicationShortName: "emergency",
+AppClientId: "APP_aGsTAY1uMZrpKdfk"
+}
+---
+kind: ConfigMap
+apiVersion: v1
+metadata:
+name: tenant-prefix-seniclive
+namespace: jsntejpt
+data:
+ingress-config.js: |-
+var __GlobalIngressConfig = {
+TenantEnvironment: "",
+CloudHOST: "10.40.51.5:8088",
+ApplicationShortName: "seniclive",
+AppClientId: "empty"
}
---
kind: ConfigMap
@@ -156,85 +268,85 @@ data:
kind: ConfigMap
apiVersion: v1
metadata:
-name: tenant-prefix-splice
+name: tenant-prefix-smsecret
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
-ApplicationShortName: "splice",
-AppClientId: "APP_zE0M3sTRXrCIJS8Y"
+ApplicationShortName: "smsecret",
+AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
-name: tenant-prefix-supervision
+name: tenant-prefix-smauth
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
-ApplicationShortName: "supervision",
-AppClientId: "APP_qqSu82THfexI8PLM"
+ApplicationShortName: "smauth",
+AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
-name: tenant-prefix-pangu
+name: tenant-prefix-eventsh5
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
-ApplicationShortName: "",
+ApplicationShortName: "eventsh5",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
-name: tenant-prefix-armypeople
+name: tenant-prefix-mianyangbackend
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
-ApplicationShortName: "armypeople",
-AppClientId: "APP_UIegse6Lfou9pO1U"
+ApplicationShortName: "mianyangbackend",
+AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
-name: tenant-prefix-base
+name: tenant-prefix-detection
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
-ApplicationShortName: "base",
-AppClientId: "APP_9LY41OaKSqk2btY0"
+ApplicationShortName: "detection",
+AppClientId: "APP_FDHW2VLVDWPnnOCy"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
-name: tenant-prefix-multiterminal
+name: tenant-prefix-oms
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
-ApplicationShortName: "multiterminal",
-AppClientId: "APP_PvdfRRRBPL8xbIwl"
+ApplicationShortName: "oms",
+AppClientId: "empty"
}
---
kind: ConfigMap
@@ -254,113 +366,29 @@ data:
kind: ConfigMap
apiVersion: v1
metadata:
-name: tenant-prefix-securityh5
+name: tenant-prefix-splice
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
-ApplicationShortName: "securityh5",
-AppClientId: "APP_N3ImO0Ubfu9peRHD"
+ApplicationShortName: "splice",
+AppClientId: "APP_zE0M3sTRXrCIJS8Y"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
-name: tenant-prefix-jiangsuwenlv
+name: tenant-prefix-ai-brain
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
-ApplicationShortName: "jiangsuwenlv",
-AppClientId: "empty"
-}
----
-kind: ConfigMap
-apiVersion: v1
-metadata:
-name: tenant-prefix-security
-namespace: jsntejpt
-data:
-ingress-config.js: |-
-var __GlobalIngressConfig = {
-TenantEnvironment: "",
-CloudHOST: "10.40.51.5:8088",
-ApplicationShortName: "security",
-AppClientId: "APP_JUSEMc7afyWXxvE7"
-}
----
-kind: ConfigMap
-apiVersion: v1
-metadata:
-name: tenant-prefix-threedsimulation
-namespace: jsntejpt
-data:
-ingress-config.js: |-
-var __GlobalIngressConfig = {
-TenantEnvironment: "",
-CloudHOST: "10.40.51.5:8088",
-ApplicationShortName: "threedsimulation",
-AppClientId: "empty"
-}
----
-kind: ConfigMap
-apiVersion: v1
-metadata:
-name: tenant-prefix-hljtt
-namespace: jsntejpt
-data:
-ingress-config.js: |-
-var __GlobalIngressConfig = {
-TenantEnvironment: "",
-CloudHOST: "10.40.51.5:8088",
-ApplicationShortName: "hljtt",
-AppClientId: "empty"
-}
----
-kind: ConfigMap
-apiVersion: v1
-metadata:
-name: tenant-prefix-visualization
-namespace: jsntejpt
-data:
-ingress-config.js: |-
-var __GlobalIngressConfig = {
-TenantEnvironment: "",
-CloudHOST: "10.40.51.5:8088",
-ApplicationShortName: "visualization",
-AppClientId: "empty"
-}
----
-kind: ConfigMap
-apiVersion: v1
-metadata:
-name: tenant-prefix-cmsportal
-namespace: jsntejpt
-data:
-ingress-config.js: |-
-var __GlobalIngressConfig = {
-TenantEnvironment: "",
-CloudHOST: "10.40.51.5:8088",
-ApplicationShortName: "cmsportal",
-AppClientId: "empty"
-}
----
-kind: ConfigMap
-apiVersion: v1
-metadata:
-name: tenant-prefix-emergency
-namespace: jsntejpt
-data:
-ingress-config.js: |-
-var __GlobalIngressConfig = {
-TenantEnvironment: "",
-CloudHOST: "10.40.51.5:8088",
-ApplicationShortName: "emergency",
-AppClientId: "APP_aGsTAY1uMZrpKdfk"
+ApplicationShortName: "ai-brain",
+AppClientId: "APP_rafnuCAmBESIVYMH"
}
---
kind: ConfigMap
@@ -373,8 +401,106 @@ data:
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "media",
AppClientId: "APP_4AU8lbifESQO4FD6"
+}
+---
+kind: ConfigMap
+apiVersion: v1
+metadata:
+name: tenant-prefix-securityh5
+namespace: jsntejpt
+data:
+ingress-config.js: |-
+var __GlobalIngressConfig = {
+TenantEnvironment: "",
+CloudHOST: "10.40.51.5:8088",
+ApplicationShortName: "securityh5",
+AppClientId: "APP_N3ImO0Ubfu9peRHD"
+}
+---
+kind: ConfigMap
+apiVersion: v1
+metadata:
+name: tenant-prefix-share
+namespace: jsntejpt
+data:
+ingress-config.js: |-
+var __GlobalIngressConfig = {
+TenantEnvironment: "",
+CloudHOST: "10.40.51.5:8088",
+ApplicationShortName: "share",
+AppClientId: "APP_4lVSVI0ZGxTssir8"
+}
+---
+kind: ConfigMap
+apiVersion: v1
+metadata:
+name: tenant-prefix-jiangsuwenlv
+namespace: jsntejpt
+data:
+ingress-config.js: |-
+var __GlobalIngressConfig = {
+TenantEnvironment: "",
+CloudHOST: "10.40.51.5:8088",
+ApplicationShortName: "jiangsuwenlv",
+AppClientId: "empty"
+}
+---
+kind: ConfigMap
+apiVersion: v1
+metadata:
+name: tenant-prefix-scanner
+namespace: jsntejpt
+data:
+ingress-config.js: |-
+var __GlobalIngressConfig = {
+TenantEnvironment: "",
+CloudHOST: "10.40.51.5:8088",
+ApplicationShortName: "scanner",
+AppClientId: "empty"
+}
+---
+kind: ConfigMap
+apiVersion: v1
+metadata:
+name: tenant-prefix-supervision
+namespace: jsntejpt
+data:
+ingress-config.js: |-
+var __GlobalIngressConfig = {
+TenantEnvironment: "",
+CloudHOST: "10.40.51.5:8088",
+ApplicationShortName: "supervision",
+AppClientId: "APP_qqSu82THfexI8PLM"
+}
+---
+kind: ConfigMap
+apiVersion: v1
+metadata:
+name: tenant-prefix-armypeople
+namespace: jsntejpt
+data:
+ingress-config.js: |-
+var __GlobalIngressConfig = {
+TenantEnvironment: "",
+CloudHOST: "10.40.51.5:8088",
+ApplicationShortName: "armypeople",
+AppClientId: "APP_UIegse6Lfou9pO1U"
+}
+---
+kind: ConfigMap
+apiVersion: v1
+metadata:
+name: tenant-prefix-logistics
+namespace: jsntejpt
+data:
+ingress-config.js: |-
+var __GlobalIngressConfig = {
+TenantEnvironment: "",
+CloudHOST: "10.40.51.5:8088",
+ApplicationShortName: "logistics",
+AppClientId: "APP_PvdfRRRBPL8xbIwl"
}
---
kind: ConfigMap
@@ -394,15 +520,15 @@ data:
kind: ConfigMap
apiVersion: v1
metadata:
-name: tenant-prefix-traffic
+name: tenant-prefix-uasms
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
-ApplicationShortName: "traffic",
-AppClientId: "APP_Jc8i2wOQ1t73QEJS"
+ApplicationShortName: "uasms",
+AppClientId: "empty"
}
---
kind: ConfigMap
@@ -418,3 +544,31 @@ data:
ApplicationShortName: "dispatchh5",
AppClientId: "empty"
}
+---
+kind: ConfigMap
+apiVersion: v1
+metadata:
+name: tenant-prefix-hyper
+namespace: jsntejpt
+data:
+ingress-config.js: |-
+var __GlobalIngressConfig = {
+TenantEnvironment: "",
+CloudHOST: "10.40.51.5:8088",
+ApplicationShortName: "hyper",
+AppClientId: "empty"
+}
+---
+kind: ConfigMap
+apiVersion: v1
+metadata:
+name: tenant-prefix-dikongzhixingh5
+namespace: jsntejpt
+data:
+ingress-config.js: |-
+var __GlobalIngressConfig = {
+TenantEnvironment: "",
+CloudHOST: "10.40.51.5:8088",
+ApplicationShortName: "dikongzhixingh5",
+AppClientId: "empty"
+}


@@ -16,7 +16,7 @@ metadata:
cmii.emqx.architecture: cluster
helm.sh/chart: emqx-1.1.0
app.kubernetes.io/managed-by: octopus-control
-app.kubernetes.io/version: 5.7.0
+app.kubernetes.io/version: 6.1.0
data:
EMQX_CLUSTER__K8S__APISERVER: "https://kubernetes.default.svc.cluster.local:443"
EMQX_NAME: "helm-emqxs"
@@ -40,7 +40,7 @@ metadata:
cmii.emqx.architecture: cluster
helm.sh/chart: emqx-1.1.0
app.kubernetes.io/managed-by: octopus-control
-app.kubernetes.io/version: 5.7.0
+app.kubernetes.io/version: 6.1.0
data:
emqx_auth_mnesia.conf: |-
auth.mnesia.password_hash = sha256
@@ -84,7 +84,7 @@ metadata:
cmii.emqx.architecture: cluster
helm.sh/chart: emqx-1.1.0
app.kubernetes.io/managed-by: octopus-control
-app.kubernetes.io/version: 5.7.0
+app.kubernetes.io/version: 6.1.0
spec:
replicas: 1
serviceName: helm-emqxs-headless
@@ -103,7 +103,7 @@ spec:
cmii.emqx.architecture: cluster
helm.sh/chart: emqx-1.1.0
app.kubernetes.io/managed-by: octopus-control
-app.kubernetes.io/version: 5.7.0
+app.kubernetes.io/version: 6.1.0
spec:
affinity: { }
imagePullSecrets:
@@ -111,7 +111,7 @@ spec:
serviceAccountName: helm-emqxs
containers:
- name: helm-emqxs
-image: 10.40.51.5:8033/cmii/emqx:4.4.9
+image: 10.40.51.5:8033/cmii/emqx:4.4.19
imagePullPolicy: Always
ports:
- name: mqtt
@@ -203,7 +203,7 @@ metadata:
cmii.emqx.architecture: cluster
helm.sh/chart: emqx-1.1.0
app.kubernetes.io/managed-by: octopus-control
-app.kubernetes.io/version: 5.7.0
+app.kubernetes.io/version: 6.1.0
spec:
type: NodePort
selector:
@@ -235,7 +235,7 @@ metadata:
cmii.emqx.architecture: cluster
helm.sh/chart: emqx-1.1.0
app.kubernetes.io/managed-by: octopus-control
-app.kubernetes.io/version: 5.7.0
+app.kubernetes.io/version: 6.1.0
spec:
type: ClusterIP
clusterIP: None

File diff suppressed because it is too large


@@ -8,7 +8,7 @@ metadata:
type: frontend
octopus.control: all-ingress-config-wdd
app.kubernetes.io/managed-by: octopus-control
-app.kubernetes.io/version: 5.7.0
+app.kubernetes.io/version: 6.1.0
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/enable-cors: "true"
@@ -20,30 +20,41 @@ metadata:
rewrite ^(/ai-brain)$ $1/ redirect;
rewrite ^(/armypeople)$ $1/ redirect;
rewrite ^(/base)$ $1/ redirect;
+rewrite ^(/blockchain)$ $1/ redirect;
+rewrite ^(/classification)$ $1/ redirect;
rewrite ^(/cmsportal)$ $1/ redirect;
rewrite ^(/detection)$ $1/ redirect;
+rewrite ^(/dikongzhixingh5)$ $1/ redirect;
rewrite ^(/dispatchh5)$ $1/ redirect;
rewrite ^(/emergency)$ $1/ redirect;
+rewrite ^(/eventsh5)$ $1/ redirect;
rewrite ^(/hljtt)$ $1/ redirect;
+rewrite ^(/hyper)$ $1/ redirect;
rewrite ^(/jiangsuwenlv)$ $1/ redirect;
rewrite ^(/logistics)$ $1/ redirect;
rewrite ^(/media)$ $1/ redirect;
+rewrite ^(/mianyangbackend)$ $1/ redirect;
rewrite ^(/multiterminal)$ $1/ redirect;
rewrite ^(/mws)$ $1/ redirect;
rewrite ^(/oms)$ $1/ redirect;
rewrite ^(/open)$ $1/ redirect;
+rewrite ^(/pilot2cloud)$ $1/ redirect;
rewrite ^(/qingdao)$ $1/ redirect;
rewrite ^(/qinghaitourism)$ $1/ redirect;
+rewrite ^(/scanner)$ $1/ redirect;
rewrite ^(/security)$ $1/ redirect;
rewrite ^(/securityh5)$ $1/ redirect;
rewrite ^(/seniclive)$ $1/ redirect;
rewrite ^(/share)$ $1/ redirect;
+rewrite ^(/smauth)$ $1/ redirect;
+rewrite ^(/smsecret)$ $1/ redirect;
rewrite ^(/splice)$ $1/ redirect;
rewrite ^(/threedsimulation)$ $1/ redirect;
rewrite ^(/traffic)$ $1/ redirect;
rewrite ^(/uas)$ $1/ redirect;
rewrite ^(/uasms)$ $1/ redirect;
rewrite ^(/visualization)$ $1/ redirect;
+rewrite ^(/secenter)$ $1/ redirect;
spec:
rules:
- host: fake-domain.jsntejpt.io
@@ -84,6 +95,16 @@ spec:
backend:
serviceName: cmii-uav-platform-base
servicePort: 9528
+- path: /blockchain/?(.*)
+pathType: ImplementationSpecific
+backend:
+serviceName: cmii-uav-platform-blockchain
+servicePort: 9528
+- path: /classification/?(.*)
+pathType: ImplementationSpecific
+backend:
+serviceName: cmii-uav-platform-classification
+servicePort: 9528
- path: /cmsportal/?(.*)
pathType: ImplementationSpecific
backend:
@@ -94,6 +115,11 @@ spec:
backend:
serviceName: cmii-uav-platform-detection
servicePort: 9528
+- path: /dikongzhixingh5/?(.*)
+pathType: ImplementationSpecific
+backend:
+serviceName: cmii-uav-platform-dikongzhixingh5
+servicePort: 9528
- path: /dispatchh5/?(.*)
pathType: ImplementationSpecific
backend:
@@ -104,11 +130,21 @@ spec:
backend:
serviceName: cmii-uav-platform-emergency-rescue
servicePort: 9528
+- path: /eventsh5/?(.*)
+pathType: ImplementationSpecific
+backend:
+serviceName: cmii-uav-platform-eventsh5
+servicePort: 9528
- path: /hljtt/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-hljtt
servicePort: 9528
+- path: /hyper/?(.*)
+pathType: ImplementationSpecific
+backend:
+serviceName: cmii-uav-platform-hyperspectral
+servicePort: 9528
- path: /jiangsuwenlv/?(.*)
pathType: ImplementationSpecific
backend:
@@ -124,6 +160,11 @@ spec:
backend:
serviceName: cmii-uav-platform-media
servicePort: 9528
+- path: /mianyangbackend/?(.*)
+pathType: ImplementationSpecific
+backend:
+serviceName: cmii-uav-platform-mianyangbackend
+servicePort: 9528
- path: /multiterminal/?(.*)
pathType: ImplementationSpecific
backend:
@@ -144,6 +185,11 @@ spec:
backend:
serviceName: cmii-uav-platform-open
servicePort: 9528
+- path: /pilot2cloud/?(.*)
+pathType: ImplementationSpecific
+backend:
+serviceName: cmii-uav-platform-pilot2-to-cloud
+servicePort: 9528
- path: /qingdao/?(.*)
pathType: ImplementationSpecific
backend:
@@ -154,6 +200,11 @@ spec:
backend:
serviceName: cmii-uav-platform-qinghaitourism
servicePort: 9528
+- path: /scanner/?(.*)
+pathType: ImplementationSpecific
+backend:
+serviceName: cmii-uav-platform-scanner
+servicePort: 9528
- path: /security/?(.*)
pathType: ImplementationSpecific
backend:
@@ -174,6 +225,16 @@ spec:
backend:
serviceName: cmii-uav-platform-share
servicePort: 9528
+- path: /smauth/?(.*)
+pathType: ImplementationSpecific
+backend:
+serviceName: cmii-uav-platform-smauth
+servicePort: 9528
+- path: /smsecret/?(.*)
+pathType: ImplementationSpecific
+backend:
+serviceName: cmii-uav-platform-smsecret
+servicePort: 9528
- path: /splice/?(.*)
pathType: ImplementationSpecific
backend:
@@ -204,6 +265,11 @@ spec:
backend:
serviceName: cmii-uav-platform-visualization
servicePort: 9528
+- path: /secenter/?(.*)
+pathType: ImplementationSpecific
+backend:
+serviceName: cmii-uavms-platform-security-center
+servicePort: 9528
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
@@ -214,7 +280,7 @@ metadata:
type: backend
octopus.control: all-ingress-config-wdd
app.kubernetes.io/managed-by: octopus-control
-app.kubernetes.io/version: 5.7.0
+app.kubernetes.io/version: 6.1.0
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/enable-cors: "true"
@@ -284,6 +350,14 @@ spec:
backend:
serviceName: cmii-uas-lifecycle
servicePort: 8080
+- host: cmii-uav-advanced5g.uavcloud-jsntejpt.io
+http:
+paths:
+- path: /
+pathType: ImplementationSpecific
+backend:
+serviceName: cmii-uav-advanced5g
+servicePort: 8080
- host: cmii-uav-airspace.uavcloud-jsntejpt.io
http:
paths:
@@ -388,6 +462,14 @@ spec:
backend:
serviceName: cmii-uav-emergency
servicePort: 8080
+- host: cmii-uav-fwdd.uavcloud-jsntejpt.io
+http:
+paths:
+- path: /
+pathType: ImplementationSpecific
+backend:
+serviceName: cmii-uav-fwdd
+servicePort: 8080
- host: cmii-uav-gateway.uavcloud-jsntejpt.io
http:
paths:
@@ -444,6 +526,14 @@ spec:
backend:
serviceName: cmii-uav-integration
servicePort: 8080
+- host: cmii-uav-iot-dispatcher.uavcloud-jsntejpt.io
+http:
+paths:
+- path: /
+pathType: ImplementationSpecific
+backend:
+serviceName: cmii-uav-iot-dispatcher
+servicePort: 8080
- host: cmii-uav-kpi-monitor.uavcloud-jsntejpt.io
http:
paths:
@@ -532,6 +622,14 @@ spec:
backend:
serviceName: cmii-uav-surveillance
servicePort: 8080
+- host: cmii-uav-sync.uavcloud-jsntejpt.io
+http:
+paths:
+- path: /
+pathType: ImplementationSpecific
+backend:
+serviceName: cmii-uav-sync
+servicePort: 8080
- host: cmii-uav-threedsimulation.uavcloud-jsntejpt.io
http:
paths:
@@ -564,6 +662,14 @@ spec:
backend:
serviceName: cmii-uav-waypoint
servicePort: 8080
+- host: cmii-uavms-security-center.uavcloud-jsntejpt.io
+http:
+paths:
+- path: /
+pathType: ImplementationSpecific
+backend:
+serviceName: cmii-uavms-security-center
+servicePort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
@@ -574,7 +680,7 @@ metadata:
type: api-gateway
octopus.control: all-ingress-config-1.1.0
app.kubernetes.io/managed-by: octopus-control
-app.kubernetes.io/version: 5.7.0
+app.kubernetes.io/version: 6.1.0
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/enable-cors: "true"
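
Each configuration-snippet rule above only appends a trailing slash: a request for the bare prefix is redirected to the slash-terminated form so relative asset URLs resolve under the prefix, and the `rewrite-target: /$1` annotation then strips the prefix before proxying to the platform service. A small Go illustration of the same regex behavior (illustrative only, not part of this repo):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Same pattern as `rewrite ^(/blockchain)$ $1/ redirect;` above:
	// only the exact, slash-less prefix matches and gains a "/".
	re := regexp.MustCompile(`^(/blockchain)$`)
	fmt.Println(re.ReplaceAllString("/blockchain", "$1/"))        // "/blockchain/"
	fmt.Println(re.ReplaceAllString("/blockchain/app.js", "$1/")) // no match, unchanged
}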


@@ -9,7 +9,7 @@ metadata:
cmii.type: middleware
helm.sh/chart: mongo-1.1.0
app.kubernetes.io/managed-by: octopus-control
-app.kubernetes.io/version: 5.7.0
+app.kubernetes.io/version: 6.1.0
spec:
type: NodePort
selector:
@@ -31,7 +31,7 @@ metadata:
cmii.type: middleware
helm.sh/chart: mongo-1.1.0
app.kubernetes.io/managed-by: octopus-control
-app.kubernetes.io/version: 5.7.0
+app.kubernetes.io/version: 6.1.0
spec:
serviceName: helm-mongo
replicas: 1
@@ -46,7 +46,7 @@ spec:
cmii.type: middleware
helm.sh/chart: mongo-1.1.0
app.kubernetes.io/managed-by: octopus-control
-app.kubernetes.io/version: 5.7.0
+app.kubernetes.io/version: 6.1.0
annotations:
pod.alpha.kubernetes.io/initialized: "true"
spec:


@@ -9,7 +9,7 @@ metadata:
cmii.type: middleware
octopus.control: nacos-wdd
app.kubernetes.io/managed-by: Helm
-app.kubernetes.io/version: 5.7.0
+app.kubernetes.io/version: 6.1.0
data:
mysql.db.name: "cmii_nacos_config"
mysql.db.host: "helm-mysql"
@@ -27,7 +27,7 @@ metadata:
cmii.type: middleware
octopus.control: nacos-wdd
app.kubernetes.io/managed-by: Helm
-app.kubernetes.io/version: 5.7.0
+app.kubernetes.io/version: 6.1.0
spec:
type: NodePort
selector:
@@ -55,7 +55,7 @@ metadata:
cmii.type: middleware
octopus.control: nacos-wdd
app.kubernetes.io/managed-by: Helm
-app.kubernetes.io/version: 5.7.0
+app.kubernetes.io/version: 6.1.0
spec:
serviceName: helm-nacos
replicas: 1
@@ -70,7 +70,7 @@ spec:
cmii.type: middleware
octopus.control: nacos-wdd
app.kubernetes.io/managed-by: octopus
-app.kubernetes.io/version: 5.7.0
+app.kubernetes.io/version: 6.1.0
annotations:
pod.alpha.kubernetes.io/initialized: "true"
spec:


@@ -8,7 +8,7 @@ metadata:
cmii.type: middleware-base
cmii.app: nfs-backend-log-pvc
helm.sh/chart: all-persistence-volume-claims-1.1.0
-app.kubernetes.io/version: 5.7.0
+app.kubernetes.io/version: 6.1.0
spec:
storageClassName: nfs-prod-distribute
accessModes:
@@ -27,7 +27,7 @@ metadata:
cmii.type: middleware-base
cmii.app: helm-emqxs
helm.sh/chart: all-persistence-volume-claims-1.1.0
-app.kubernetes.io/version: 5.7.0
+app.kubernetes.io/version: 6.1.0
spec:
storageClassName: nfs-prod-distribute
accessModes:
@@ -46,7 +46,7 @@ metadata:
cmii.type: middleware-base
cmii.app: helm-mongo
helm.sh/chart: all-persistence-volume-claims-1.1.0
-app.kubernetes.io/version: 5.7.0
+app.kubernetes.io/version: 6.1.0
spec:
storageClassName: nfs-prod-distribute
accessModes:
@@ -65,7 +65,7 @@ metadata:
cmii.type: middleware-base
cmii.app: helm-rabbitmq
helm.sh/chart: all-persistence-volume-claims-1.1.0
-app.kubernetes.io/version: 5.7.0
+app.kubernetes.io/version: 6.1.0
spec:
storageClassName: nfs-prod-distribute
accessModes:


@@ -408,8 +408,8 @@ spec:
cpu: "2"
memory: 8Gi
requests:
-cpu: "100m"
-memory: 1Gi
+cpu: "2"
+memory: 8Gi
volumeMounts:
- name: start-scripts
mountPath: /opt/bitnami/scripts/start-scripts
@@ -552,8 +552,8 @@ spec:
cpu: "2"
memory: 8Gi
requests:
-cpu: "100m"
-memory: 1Gi
+cpu: "2"
+memory: 8Gi
volumeMounts:
- name: start-scripts
mountPath: /opt/bitnami/scripts/start-scripts


@@ -99,8 +99,8 @@ spec:
ports:
- name: rtmp
protocol: TCP
-port: 30935
-targetPort: 30935
+port: 31935
+targetPort: 31935
nodePort: 31935
- name: rtc
protocol: UDP
@@ -165,8 +165,8 @@ spec:
ports:
- name: rtmp
protocol: TCP
-port: 30935
-targetPort: 30935
+port: 31935
+targetPort: 31935
selector:
srs-role: rtc
type: ClusterIP
@@ -211,7 +211,7 @@ spec:
image: 10.40.51.5:8033/cmii/srs:v5.0.195
ports:
- name: srs-rtmp
-containerPort: 30935
+containerPort: 31935
protocol: TCP
- name: srs-api
containerPort: 1985
@@ -458,21 +458,21 @@ data:
info:
name: cmii-live-operator
description: cmii-live-operator
-version: 5.7.0
+version: 6.1.0
scanPackage: com.cmii.live.op
cloud:
nacos:
config:
-username: developer
-password: N@cos14Good
+username: nacos
+password: KingKong@95461234
server-addr: helm-nacos:8848
extension-configs:
- data-id: cmii-live-operator.yml
-group: 5.7.0
+group: 6.1.0
refresh: true
shared-configs:
- data-id: cmii-backend-system.yml
-group: 5.7.0
+group: 6.1.0
refresh: true
discovery:
enabled: false
@@ -487,7 +487,7 @@ data:
srt: 'srt://10.40.51.5:30556'
flv: 'http://10.40.51.5:30500'
hls: 'http://10.40.51.5:30500'
-rtc: 'webrtc://10.40.51.5:30090'
+rtc: 'webrtc://10.40.51.5:30080'
replay: 'https://10.40.51.5:30333'
minio:
endpoint: http://10.40.51.5:9000


@@ -1,79 +0,0 @@
package jsntejpt
var AllCmiiImageList = []string{
"harbor.cdcyy.com.cn/cmii/cmii-uas-gateway:5.6.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-depotautoreturn:5.5.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-gateway:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-open-gateway:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-brain:5.5.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-surveillance:5.7.0-29766-0815",
"harbor.cdcyy.com.cn/cmii/cmii-uav-gis-server:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-clusters:5.2.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-grid-engine:5.1.0",
"harbor.cdcyy.com.cn/cmii/cmii-iam-gateway:5.6.0",
"harbor.cdcyy.com.cn/cmii/cmii-uas-lifecycle:5.7.0-30403",
"harbor.cdcyy.com.cn/cmii/cmii-uav-tower:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-material-warehouse:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-sense-adapter:5.7.0-0805",
"harbor.cdcyy.com.cn/cmii/cmii-uav-threedsimulation:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-grid-manage:5.1.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-user:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-grid-datasource:5.2.0-24810",
"harbor.cdcyy.com.cn/cmii/cmii-uav-airspace:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-autowaypoint:4.2.0-beta",
"harbor.cdcyy.com.cn/cmii/cmii-uav-logger:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-mission:5.7.0-29766-0819",
"harbor.cdcyy.com.cn/cmii/cmii-admin-data:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-integration:5.7.0-hw-080201",
"harbor.cdcyy.com.cn/cmii/cmii-uav-oauth:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-admin-gateway:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-app-release:4.2.0-validation",
"harbor.cdcyy.com.cn/cmii/cmii-uav-data-post-process:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-alarm:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-waypoint:5.7.0-0814",
"harbor.cdcyy.com.cn/cmii/cmii-uav-industrial-portfolio:5.7.0-31369-yunnan-082001",
"harbor.cdcyy.com.cn/cmii/cmii-uav-mqtthandler:5.7.0-29766-0815",
"harbor.cdcyy.com.cn/cmii/cmii-uav-notice:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-cloud-live:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-developer:5.7.0-0725",
"harbor.cdcyy.com.cn/cmii/cmii-uav-multilink:5.5.0",
"harbor.cdcyy.com.cn/cmii/cmii-suav-supervision:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-cms:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-process:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-admin-user:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-device:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-emergency:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-kpi-monitor:5.5.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-splice:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-ai-brain:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-visualization:5.2.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-emergency-rescue:5.6.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-dispatchh5:5.6.0-0708",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-seniclive:5.2.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-base:5.4.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-oms:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-suav-platform-supervision:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-jiangsuwenlv:4.1.3-jiangsu-0427",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-security:5.6.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-armypeople:5.7.0-0820",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-cms-portal:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-detection:5.6.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-mws:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-suav-platform-supervisionh5:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-share:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-open:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-qinghaitourism:4.1.0-21377-0508",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-threedsimulation:5.2.0-21392",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-uasms:5.7.0-29322",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-media:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-multiterminal:5.6.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-securityh5:5.7.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform:5.7.0-29267-0820",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-logistics:5.6.0",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-hljtt:5.3.0-hjltt",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-uas:5.7.0-29322",
"harbor.cdcyy.com.cn/cmii/cmii-uav-platform-qingdao:5.7.0-29766-0815",
"harbor.cdcyy.com.cn/cmii/cmii-srs-oss-adaptor:2023-SA",
"harbor.cdcyy.com.cn/cmii/ossrs/srs:v5.0.195",
"harbor.cdcyy.com.cn/cmii/cmii-live-operator:5.2.0",
}
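
The deleted list above pins images on the public Harbor (harbor.cdcyy.com.cn), while the manifests in this changeset pull from the on-site registry 10.40.51.5:8033. A hedged Go sketch of the kind of prefix rewrite that bridges the two (hypothetical helper, not part of this codebase):

package main

import (
	"fmt"
	"strings"
)

// retagForLocalRegistry rewrites a public Harbor reference to the
// on-site registry used by the manifests in this changeset.
func retagForLocalRegistry(image string) string {
	return strings.Replace(image, "harbor.cdcyy.com.cn", "10.40.51.5:8033", 1)
}

func main() {
	fmt.Println(retagForLocalRegistry("harbor.cdcyy.com.cn/cmii/cmii-live-operator:5.2.0"))
	// Output: 10.40.51.5:8033/cmii/cmii-live-operator:5.2.0
}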

File diff suppressed because it is too large


@@ -0,0 +1,420 @@
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-supervisionh5
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "supervisionh5",
AppClientId: "APP_qqSu82THfexI8PLM"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-detection
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "detection",
AppClientId: "APP_FDHW2VLVDWPnnOCy"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-oms
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "oms",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-seniclive
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "seniclive",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-qinghaitourism
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "qinghaitourism",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-qingdao
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "qingdao",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-uasms
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "uasms",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-ai-brain
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "ai-brain",
AppClientId: "APP_rafnuCAmBESIVYMH"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-logistics
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "logistics",
AppClientId: "APP_PvdfRRRBPL8xbIwl"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-share
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "share",
AppClientId: "APP_4lVSVI0ZGxTssir8"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-uas
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "uas",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-splice
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "splice",
AppClientId: "APP_zE0M3sTRXrCIJS8Y"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-supervision
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "supervision",
AppClientId: "APP_qqSu82THfexI8PLM"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-pangu
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-armypeople
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "armypeople",
AppClientId: "APP_UIegse6Lfou9pO1U"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-base
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "base",
AppClientId: "APP_9LY41OaKSqk2btY0"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-multiterminal
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "multiterminal",
AppClientId: "APP_PvdfRRRBPL8xbIwl"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-open
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "open",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-securityh5
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "securityh5",
AppClientId: "APP_N3ImO0Ubfu9peRHD"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-jiangsuwenlv
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "jiangsuwenlv",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-security
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "security",
AppClientId: "APP_JUSEMc7afyWXxvE7"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-threedsimulation
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "threedsimulation",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-hljtt
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "hljtt",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-visualization
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "visualization",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-cmsportal
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "cmsportal",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-emergency
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "emergency",
AppClientId: "APP_aGsTAY1uMZrpKdfk"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-media
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "media",
AppClientId: "APP_4AU8lbifESQO4FD6"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-mws
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "mws",
AppClientId: "APP_uKniXPELlRERBBwK"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-traffic
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "traffic",
AppClientId: "APP_Jc8i2wOQ1t73QEJS"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-dispatchh5
namespace: jsntejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "10.40.51.5:8088",
ApplicationShortName: "dispatchh5",
AppClientId: "empty"
}


@@ -0,0 +1,309 @@
---
apiVersion: v1
kind: Namespace
metadata:
name: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
name: kubernetes-dashboard
namespace: kube-system
spec:
ports:
- port: 443
targetPort: 8443
nodePort: 39999
selector:
k8s-app: kubernetes-dashboard
type: NodePort
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kube-system
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-csrf
namespace: kube-system
type: Opaque
data:
csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-key-holder
namespace: kube-system
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-settings
namespace: kube-system
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
rules:
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [ "" ]
resources: [ "secrets" ]
resourceNames: [ "kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf" ]
verbs: [ "get", "update", "delete" ]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [ "" ]
resources: [ "configmaps" ]
resourceNames: [ "kubernetes-dashboard-settings" ]
verbs: [ "get", "update" ]
# Allow Dashboard to get metrics.
- apiGroups: [ "" ]
resources: [ "services" ]
resourceNames: [ "heapster", "dashboard-metrics-scraper" ]
verbs: [ "proxy" ]
- apiGroups: [ "" ]
resources: [ "services/proxy" ]
resourceNames: [ "heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper" ]
verbs: [ "get" ]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
rules:
# Allow Metrics Scraper to get metrics from the Metrics server
- apiGroups: [ "metrics.k8s.io" ]
resources: [ "pods", "nodes" ]
verbs: [ "get", "list", "watch" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
imagePullSecrets:
- name: harborsecret
containers:
- name: kubernetes-dashboard
image: 10.40.51.5:8033/cmii/dashboard:v2.0.1
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
- --namespace=kube-system
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: { }
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kube-system
spec:
ports:
- port: 8000
targetPort: 8000
selector:
k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kube-system
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: dashboard-metrics-scraper
template:
metadata:
labels:
k8s-app: dashboard-metrics-scraper
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
spec:
containers:
- name: dashboard-metrics-scraper
image: 10.40.51.5:8033/cmii/metrics-scraper:v1.0.4
ports:
- containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 8000
initialDelaySeconds: 30
timeoutSeconds: 30
volumeMounts:
- mountPath: /tmp
name: tmp-volume
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
volumes:
- name: tmp-volume
emptyDir: { }
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kube-system
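
Once the admin-user ServiceAccount and its cluster-admin binding exist, the dashboard login token lives in the ServiceAccount's generated secret; on clusters of this vintage (the Ingress objects above still use networking.k8s.io/v1beta1), sa.Secrets is auto-populated. A hedged client-go sketch for fetching it (illustrative, not part of this repo; the kubeconfig path is an assumption):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	sa, err := cs.CoreV1().ServiceAccounts("kube-system").Get(ctx, "admin-user", metav1.GetOptions{})
	if err != nil || len(sa.Secrets) == 0 {
		log.Fatal("admin-user token secret not found: ", err)
	}
	sec, err := cs.CoreV1().Secrets("kube-system").Get(ctx, sa.Secrets[0].Name, metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(sec.Data["token"])) // paste into the dashboard login page
}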


@@ -0,0 +1,274 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: helm-emqxs
namespace: jsntejpt
---
apiVersion: v1
kind: ConfigMap
metadata:
name: helm-emqxs-env
namespace: jsntejpt
labels:
cmii.type: middleware
cmii.app: helm-emqxs
cmii.emqx.architecture: cluster
helm.sh/chart: emqx-1.1.0
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 5.7.0
data:
EMQX_CLUSTER__K8S__APISERVER: "https://kubernetes.default.svc.cluster.local:443"
EMQX_NAME: "helm-emqxs"
EMQX_CLUSTER__DISCOVERY: "k8s"
EMQX_CLUSTER__K8S__APP_NAME: "helm-emqxs"
EMQX_CLUSTER__K8S__SERVICE_NAME: "helm-emqxs-headless"
EMQX_CLUSTER__K8S__ADDRESS_TYPE: "dns"
EMQX_CLUSTER__K8S__namespace: "jsntejpt"
EMQX_CLUSTER__K8S__SUFFIX: "svc.cluster.local"
EMQX_ALLOW_ANONYMOUS: "false"
EMQX_ACL_NOMATCH: "deny"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: helm-emqxs-cm
namespace: jsntejpt
labels:
cmii.type: middleware
cmii.app: helm-emqxs
cmii.emqx.architecture: cluster
helm.sh/chart: emqx-1.1.0
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 5.7.0
data:
emqx_auth_mnesia.conf: |-
auth.mnesia.password_hash = sha256
# clientid authentication data
# auth.client.1.clientid = admin
# auth.client.1.password = 4YPk*DS%+5
## username authentication data
auth.user.1.username = admin
auth.user.1.password = odD8#Ve7.B
auth.user.2.username = cmlc
auth.user.2.password = odD8#Ve7.B
acl.conf: |-
{allow, {user, "admin"}, pubsub, ["admin/#"]}.
{allow, {user, "dashboard"}, subscribe, ["$SYS/#"]}.
{allow, {ipaddr, "127.0.0.1"}, pubsub, ["$SYS/#", "#"]}.
{deny, all, subscribe, ["$SYS/#", {eq, "#"}]}.
{allow, all}.
loaded_plugins: |-
{emqx_auth_mnesia,true}.
{emqx_auth_mnesia,true}.
{emqx_management, true}.
{emqx_recon, true}.
{emqx_retainer, false}.
{emqx_dashboard, true}.
{emqx_telemetry, true}.
{emqx_rule_engine, true}.
{emqx_bridge_mqtt, false}.
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: helm-emqxs
namespace: jsntejpt
labels:
cmii.type: middleware
cmii.app: helm-emqxs
cmii.emqx.architecture: cluster
helm.sh/chart: emqx-1.1.0
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 5.7.0
spec:
replicas: 1
serviceName: helm-emqxs-headless
updateStrategy:
type: RollingUpdate
selector:
matchLabels:
cmii.type: middleware
cmii.app: helm-emqxs
cmii.emqx.architecture: cluster
template:
metadata:
labels:
cmii.type: middleware
cmii.app: helm-emqxs
cmii.emqx.architecture: cluster
helm.sh/chart: emqx-1.1.0
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 5.7.0
spec:
affinity: { }
imagePullSecrets:
- name: harborsecret
serviceAccountName: helm-emqxs
containers:
- name: helm-emqxs
image: 10.40.51.5:8033/cmii/emqx:4.4.9
imagePullPolicy: Always
ports:
- name: mqtt
containerPort: 1883
- name: mqttssl
containerPort: 8883
- name: mgmt
containerPort: 8081
- name: ws
containerPort: 8083
- name: wss
containerPort: 8084
- name: dashboard
containerPort: 18083
- name: ekka
containerPort: 4370
envFrom:
- configMapRef:
name: helm-emqxs-env
resources: { }
volumeMounts:
- name: emqx-data
mountPath: "/opt/emqx/data/mnesia"
readOnly: false
- name: helm-emqxs-cm
mountPath: "/opt/emqx/etc/plugins/emqx_auth_mnesia.conf"
subPath: emqx_auth_mnesia.conf
readOnly: false
# - name: helm-emqxs-cm
# mountPath: "/opt/emqx/etc/acl.conf"
# subPath: "acl.conf"
# readOnly: false
- name: helm-emqxs-cm
mountPath: "/opt/emqx/data/loaded_plugins"
subPath: loaded_plugins
readOnly: false
volumes:
- name: emqx-data
persistentVolumeClaim:
claimName: helm-emqxs
- name: helm-emqxs-cm
configMap:
name: helm-emqxs-cm
items:
- key: emqx_auth_mnesia.conf
path: emqx_auth_mnesia.conf
- key: acl.conf
path: acl.conf
- key: loaded_plugins
path: loaded_plugins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: helm-emqxs
namespace: jsntejpt
rules:
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
- watch
- list
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: helm-emqxs
namespace: jsntejpt
subjects:
- kind: ServiceAccount
name: helm-emqxs
namespace: jsntejpt
roleRef:
kind: Role
name: helm-emqxs
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Service
metadata:
name: helm-emqxs
namespace: jsntejpt
labels:
cmii.type: middleware
cmii.app: helm-emqxs
cmii.emqx.architecture: cluster
helm.sh/chart: emqx-1.1.0
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 5.7.0
spec:
type: NodePort
selector:
cmii.type: middleware
cmii.app: helm-emqxs
cmii.emqx.architecture: cluster
ports:
- port: 1883
name: mqtt
targetPort: 1883
nodePort: 31883
- port: 18083
name: dashboard
targetPort: 18083
nodePort: 38085
- port: 8083
name: mqtt-websocket
targetPort: 8083
nodePort: 38083
---
apiVersion: v1
kind: Service
metadata:
name: helm-emqxs-headless
namespace: jsntejpt
labels:
cmii.type: middleware
cmii.app: helm-emqxs
cmii.emqx.architecture: cluster
helm.sh/chart: emqx-1.1.0
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 5.7.0
spec:
type: ClusterIP
clusterIP: None
selector:
cmii.type: middleware
cmii.app: helm-emqxs
cmii.emqx.architecture: cluster
ports:
- name: mqtt
port: 1883
protocol: TCP
targetPort: 1883
- name: mqttssl
port: 8883
protocol: TCP
targetPort: 8883
- name: mgmt
port: 8081
protocol: TCP
targetPort: 8081
- name: websocket
port: 8083
protocol: TCP
targetPort: 8083
- name: wss
port: 8084
protocol: TCP
targetPort: 8084
- name: dashboard
port: 18083
protocol: TCP
targetPort: 18083
- name: ekka
port: 4370
protocol: TCP
targetPort: 4370
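
Note: because EMQX_ALLOW_ANONYMOUS is "false" and EMQX_ACL_NOMATCH is "deny", clients must authenticate with the mnesia users defined in helm-emqxs-cm. A quick smoke test through the NodePort Service above, assuming a mosquitto client and a reachable node IP (the node IP and topic are placeholders):

# subscribe via NodePort 31883 with the admin credentials from the ConfigMap
mosquitto_sub -h <node-ip> -p 31883 -u admin -P 'odD8#Ve7.B' -t 'test/#' -v
# publish from another shell; the subscriber should print the payload
mosquitto_pub -h <node-ip> -p 31883 -u cmlc -P 'odD8#Ve7.B' -t 'test/hello' -m 'ok'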

File diff suppressed because it is too large


@@ -0,0 +1,604 @@
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: frontend-applications-ingress
namespace: jsntejpt
labels:
type: frontend
octopus.control: all-ingress-config-wdd
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 5.7.0
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/configuration-snippet: |
rewrite ^(/supervision)$ $1/ redirect;
rewrite ^(/supervisionh5)$ $1/ redirect;
rewrite ^(/pangu)$ $1/ redirect;
rewrite ^(/ai-brain)$ $1/ redirect;
rewrite ^(/armypeople)$ $1/ redirect;
rewrite ^(/base)$ $1/ redirect;
rewrite ^(/cmsportal)$ $1/ redirect;
rewrite ^(/detection)$ $1/ redirect;
rewrite ^(/dispatchh5)$ $1/ redirect;
rewrite ^(/emergency)$ $1/ redirect;
rewrite ^(/hljtt)$ $1/ redirect;
rewrite ^(/jiangsuwenlv)$ $1/ redirect;
rewrite ^(/logistics)$ $1/ redirect;
rewrite ^(/media)$ $1/ redirect;
rewrite ^(/multiterminal)$ $1/ redirect;
rewrite ^(/mws)$ $1/ redirect;
rewrite ^(/oms)$ $1/ redirect;
rewrite ^(/open)$ $1/ redirect;
rewrite ^(/qingdao)$ $1/ redirect;
rewrite ^(/qinghaitourism)$ $1/ redirect;
rewrite ^(/security)$ $1/ redirect;
rewrite ^(/securityh5)$ $1/ redirect;
rewrite ^(/seniclive)$ $1/ redirect;
rewrite ^(/share)$ $1/ redirect;
rewrite ^(/splice)$ $1/ redirect;
rewrite ^(/threedsimulation)$ $1/ redirect;
rewrite ^(/traffic)$ $1/ redirect;
rewrite ^(/uas)$ $1/ redirect;
rewrite ^(/uasms)$ $1/ redirect;
rewrite ^(/visualization)$ $1/ redirect;
spec:
rules:
- host: fake-domain.jsntejpt.io
http:
paths:
- path: /?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform
servicePort: 9528
- path: /supervision/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-suav-platform-supervision
servicePort: 9528
- path: /supervisionh5/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-suav-platform-supervisionh5
servicePort: 9528
- path: /pangu/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform
servicePort: 9528
- path: /ai-brain/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-ai-brain
servicePort: 9528
- path: /armypeople/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-armypeople
servicePort: 9528
- path: /base/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-base
servicePort: 9528
- path: /cmsportal/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-cms-portal
servicePort: 9528
- path: /detection/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-detection
servicePort: 9528
- path: /dispatchh5/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-dispatchh5
servicePort: 9528
- path: /emergency/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-emergency-rescue
servicePort: 9528
- path: /hljtt/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-hljtt
servicePort: 9528
- path: /jiangsuwenlv/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-jiangsuwenlv
servicePort: 9528
- path: /logistics/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-logistics
servicePort: 9528
- path: /media/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-media
servicePort: 9528
- path: /multiterminal/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-multiterminal
servicePort: 9528
- path: /mws/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-mws
servicePort: 9528
- path: /oms/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-oms
servicePort: 9528
- path: /open/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-open
servicePort: 9528
- path: /qingdao/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-qingdao
servicePort: 9528
- path: /qinghaitourism/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-qinghaitourism
servicePort: 9528
- path: /security/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-security
servicePort: 9528
- path: /securityh5/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-securityh5
servicePort: 9528
- path: /seniclive/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-seniclive
servicePort: 9528
- path: /share/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-share
servicePort: 9528
- path: /splice/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-splice
servicePort: 9528
- path: /threedsimulation/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-threedsimulation
servicePort: 9528
- path: /traffic/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-traffic
servicePort: 9528
- path: /uas/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-uas
servicePort: 9528
- path: /uasms/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-uasms
servicePort: 9528
- path: /visualization/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-visualization
servicePort: 9528
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: backend-applications-ingress
namespace: jsntejpt
labels:
type: backend
octopus.control: all-ingress-config-wdd
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 5.7.0
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/enable-cors: "true"
spec:
rules:
- host: cmii-admin-data.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-admin-data
servicePort: 8080
- host: cmii-admin-gateway.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-admin-gateway
servicePort: 8080
- host: cmii-admin-user.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-admin-user
servicePort: 8080
- host: cmii-app-release.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-app-release
servicePort: 8080
- host: cmii-open-gateway.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-open-gateway
servicePort: 8080
- host: cmii-suav-supervision.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-suav-supervision
servicePort: 8080
- host: cmii-uas-gateway.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uas-gateway
servicePort: 8080
- host: cmii-uas-lifecycle.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uas-lifecycle
servicePort: 8080
- host: cmii-uav-airspace.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-airspace
servicePort: 8080
- host: cmii-uav-alarm.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-alarm
servicePort: 8080
- host: cmii-uav-autowaypoint.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-autowaypoint
servicePort: 8080
- host: cmii-uav-brain.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-brain
servicePort: 8080
- host: cmii-uav-bridge.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-bridge
servicePort: 8080
- host: cmii-uav-cloud-live.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-cloud-live
servicePort: 8080
- host: cmii-uav-clusters.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-clusters
servicePort: 8080
- host: cmii-uav-cms.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-cms
servicePort: 8080
- host: cmii-uav-data-post-process.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-data-post-process
servicePort: 8080
- host: cmii-uav-depotautoreturn.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-depotautoreturn
servicePort: 8080
- host: cmii-uav-developer.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-developer
servicePort: 8080
- host: cmii-uav-device.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-device
servicePort: 8080
- host: cmii-uav-emergency.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-emergency
servicePort: 8080
- host: cmii-uav-gateway.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-gateway
servicePort: 8080
- host: cmii-uav-gis-server.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-gis-server
servicePort: 8080
- host: cmii-uav-grid-datasource.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-grid-datasource
servicePort: 8080
- host: cmii-uav-grid-engine.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-grid-engine
servicePort: 8080
- host: cmii-uav-grid-manage.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-grid-manage
servicePort: 8080
- host: cmii-uav-industrial-portfolio.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-industrial-portfolio
servicePort: 8080
- host: cmii-uav-integration.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-integration
servicePort: 8080
- host: cmii-uav-kpi-monitor.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-kpi-monitor
servicePort: 8080
- host: cmii-uav-logger.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-logger
servicePort: 8080
- host: cmii-uav-material-warehouse.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-material-warehouse
servicePort: 8080
- host: cmii-uav-mission.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-mission
servicePort: 8080
- host: cmii-uav-mqtthandler.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-mqtthandler
servicePort: 8080
- host: cmii-uav-multilink.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-multilink
servicePort: 8080
- host: cmii-uav-notice.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-notice
servicePort: 8080
- host: cmii-uav-oauth.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-oauth
servicePort: 8080
- host: cmii-uav-process.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-process
servicePort: 8080
- host: cmii-uav-sense-adapter.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-sense-adapter
servicePort: 8080
- host: cmii-uav-surveillance.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-surveillance
servicePort: 8080
- host: cmii-uav-threedsimulation.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-threedsimulation
servicePort: 8080
- host: cmii-uav-tower.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-tower
servicePort: 8080
- host: cmii-uav-user.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-user
servicePort: 8080
- host: cmii-uav-waypoint.uavcloud-jsntejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-waypoint
servicePort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: all-gateways-ingress
namespace: jsntejpt
labels:
type: api-gateway
octopus.control: all-ingress-config-1.1.0
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 5.7.0
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
spec:
rules:
- host: fake-domain.jsntejpt.io
http:
paths:
- path: /oms/api/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-admin-gateway
servicePort: 8080
- path: /open/api/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-open-gateway
servicePort: 8080
- path: /api/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-gateway
servicePort: 8080
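
Note: all three Ingress objects route by Host header, so a client must send fake-domain.jsntejpt.io (frontends and gateways) or one of the per-service *.uavcloud-jsntejpt.io hosts (backends). A hedged smoke test against the NGINX ingress controller; the node IP and request paths are placeholders, not taken from the manifests:

# /api/... is rewritten and forwarded to cmii-uav-gateway:8080
curl -H 'Host: fake-domain.jsntejpt.io' http://<ingress-node-ip>/api/health
# backend services answer on their own virtual hosts
curl -H 'Host: cmii-uav-gateway.uavcloud-jsntejpt.io' http://<ingress-node-ip>/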


@@ -3,15 +3,15 @@ apiVersion: v1
 kind: Service
 metadata:
   name: helm-mongo
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     cmii.app: helm-mongo
     cmii.type: middleware
     helm.sh/chart: mongo-1.1.0
     app.kubernetes.io/managed-by: octopus-control
-    app.kubernetes.io/version: 5.6.0
+    app.kubernetes.io/version: 5.7.0
 spec:
-  type: ClusterIP
+  type: NodePort
   selector:
     cmii.app: helm-mongo
     cmii.type: middleware
@@ -19,18 +19,19 @@ spec:
     - port: 27017
       name: server-27017
       targetPort: 27017
+      nodePort: 37017
 ---
 apiVersion: apps/v1
 kind: StatefulSet
 metadata:
   name: helm-mongo
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     cmii.app: helm-mongo
     cmii.type: middleware
     helm.sh/chart: mongo-1.1.0
     app.kubernetes.io/managed-by: octopus-control
-    app.kubernetes.io/version: 5.6.0
+    app.kubernetes.io/version: 5.7.0
 spec:
   serviceName: helm-mongo
   replicas: 1
@@ -45,7 +46,7 @@ spec:
         cmii.type: middleware
         helm.sh/chart: mongo-1.1.0
         app.kubernetes.io/managed-by: octopus-control
-        app.kubernetes.io/version: 5.6.0
+        app.kubernetes.io/version: 5.7.0
       annotations:
         pod.alpha.kubernetes.io/initialized: "true"
     spec:
@@ -54,7 +55,7 @@ spec:
       affinity: { }
       containers:
         - name: helm-mongo
-          image: harbor.cdcyy.com.cn/cmii/mongo:5.0
+          image: 10.40.51.5:8033/cmii/mongo:5.0
           resources: { }
           ports:
             - containerPort: 27017
@@ -64,7 +65,7 @@ spec:
             - name: MONGO_INITDB_ROOT_USERNAME
               value: cmlc
             - name: MONGO_INITDB_ROOT_PASSWORD
-              value: 7(#dD3zcz8
+              value: REdPza8#oVlt
           volumeMounts:
             - name: mongo-data
               mountPath: /data/db


@@ -3,11 +3,11 @@ apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: helm-mysql
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     app.kubernetes.io/name: mysql-db
     octopus.control: mysql-db-wdd
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     app.kubernetes.io/managed-by: octopus
   annotations: { }
 secrets:
@@ -17,26 +17,26 @@ apiVersion: v1
 kind: Secret
 metadata:
   name: helm-mysql
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     app.kubernetes.io/name: mysql-db
     octopus.control: mysql-db-wdd
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     app.kubernetes.io/managed-by: octopus
 type: Opaque
 data:
-  mysql-root-password: "R3d1YmM2Q3hSTQ=="
+  mysql-root-password: "UXpmWFFoZDNiUQ=="
   mysql-password: "S0F0cm5PckFKNw=="
 ---
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: helm-mysql
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     app.kubernetes.io/name: mysql-db
     octopus.control: mysql-db-wdd
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     app.kubernetes.io/managed-by: octopus
     app.kubernetes.io/component: primary
 data:
@@ -152,11 +152,11 @@ apiVersion: v1
 kind: ConfigMap
 metadata:
   name: helm-mysql-init-scripts
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     app.kubernetes.io/name: mysql-db
     octopus.control: mysql-db-wdd
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     app.kubernetes.io/managed-by: octopus
     app.kubernetes.io/component: primary
 data:
@@ -169,7 +169,7 @@ data:
     grant all
         on *.* to zyly_qc@'%';
     create
-        user k8s_admin@'%' identified by 'VFJncwy58^Zm';
+        user k8s_admin@'%' identified by 'fP#UaH6qQ3)8';
     grant all
         on *.* to k8s_admin@'%';
     create
@@ -192,12 +192,12 @@ kind: Service
 apiVersion: v1
 metadata:
   name: cmii-mysql
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     app.kubernetes.io/component: primary
     app.kubernetes.io/managed-by: octopus
     app.kubernetes.io/name: mysql-db
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     cmii.app: mysql
     cmii.type: middleware
     octopus.control: mysql-db-wdd
@@ -210,7 +210,7 @@ spec:
   selector:
     app.kubernetes.io/component: primary
     app.kubernetes.io/name: mysql-db
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     cmii.app: mysql
     cmii.type: middleware
   type: ClusterIP
@@ -219,11 +219,11 @@ apiVersion: v1
 kind: Service
 metadata:
   name: helm-mysql-headless
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     app.kubernetes.io/name: mysql-db
     octopus.control: mysql-db-wdd
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     app.kubernetes.io/managed-by: octopus
     cmii.type: middleware
     cmii.app: mysql
@@ -239,7 +239,7 @@ spec:
       targetPort: mysql
   selector:
     app.kubernetes.io/name: mysql-db
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     cmii.type: middleware
     cmii.app: mysql
     app.kubernetes.io/component: primary
@@ -248,11 +248,11 @@ apiVersion: v1
 kind: Service
 metadata:
   name: helm-mysql
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     app.kubernetes.io/name: mysql-db
     octopus.control: mysql-db-wdd
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     app.kubernetes.io/managed-by: octopus
     cmii.type: middleware
     cmii.app: mysql
@@ -265,10 +265,10 @@ spec:
       port: 3306
       protocol: TCP
       targetPort: mysql
-      nodePort: 33308
+      nodePort: 33306
   selector:
     app.kubernetes.io/name: mysql-db
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     cmii.type: middleware
     cmii.app: mysql
     app.kubernetes.io/component: primary
@@ -277,11 +277,11 @@ apiVersion: apps/v1
 kind: StatefulSet
 metadata:
   name: helm-mysql
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     app.kubernetes.io/name: mysql-db
     octopus.control: mysql-db-wdd
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     app.kubernetes.io/managed-by: octopus
     cmii.type: middleware
     cmii.app: mysql
@@ -291,7 +291,7 @@ spec:
   selector:
     matchLabels:
       app.kubernetes.io/name: mysql-db
-      app.kubernetes.io/release: uavcloud-devoperation
+      app.kubernetes.io/release: jsntejpt
       cmii.type: middleware
       cmii.app: mysql
       app.kubernetes.io/component: primary
@@ -305,7 +305,7 @@ spec:
       labels:
         app.kubernetes.io/name: mysql-db
         octopus.control: mysql-db-wdd
-        app.kubernetes.io/release: uavcloud-devoperation
+        app.kubernetes.io/release: jsntejpt
         app.kubernetes.io/managed-by: octopus
         cmii.type: middleware
         cmii.app: mysql
@@ -321,7 +321,7 @@ spec:
         fsGroup: 1001
       initContainers:
         - name: change-volume-permissions
-          image: harbor.cdcyy.com.cn/cmii/bitnami-shell:11-debian-11-r136
+          image: 10.40.51.5:8033/cmii/bitnami-shell:11-debian-11-r136
          imagePullPolicy: "Always"
          command:
            - /bin/bash
@@ -335,7 +335,7 @@ spec:
              mountPath: /bitnami/mysql
       containers:
         - name: mysql
-          image: harbor.cdcyy.com.cn/cmii/mysql:8.1.0-debian-11-r42
+          image: 10.40.51.5:8033/cmii/mysql:8.1.0-debian-11-r42
          imagePullPolicy: "IfNotPresent"
          securityContext:
            runAsUser: 1001
@@ -420,4 +420,4 @@ spec:
            name: helm-mysql-init-scripts
        - name: mysql-data
          hostPath:
-            path: /var/lib/docker/mysql-pv/uavcloud-devoperation/
+            path: /var/lib/docker/mysql-pv/jsntejpt/


@@ -3,31 +3,31 @@ apiVersion: v1
 kind: ConfigMap
 metadata:
   name: helm-nacos-cm
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     cmii.app: helm-nacos
     cmii.type: middleware
     octopus.control: nacos-wdd
     app.kubernetes.io/managed-by: Helm
-    app.kubernetes.io/version: 5.6.0
+    app.kubernetes.io/version: 5.7.0
 data:
   mysql.db.name: "cmii_nacos_config"
   mysql.db.host: "helm-mysql"
   mysql.port: "3306"
   mysql.user: "k8s_admin"
-  mysql.password: "VFJncwy58^Zm"
+  mysql.password: "fP#UaH6qQ3)8"
 ---
 apiVersion: v1
 kind: Service
 metadata:
   name: helm-nacos
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     cmii.app: helm-nacos
     cmii.type: middleware
     octopus.control: nacos-wdd
     app.kubernetes.io/managed-by: Helm
-    app.kubernetes.io/version: 5.6.0
+    app.kubernetes.io/version: 5.7.0
 spec:
   type: NodePort
   selector:
@@ -37,7 +37,7 @@ spec:
     - port: 8848
       name: server
       targetPort: 8848
-      nodePort: 33850
+      nodePort: 38848
     - port: 9848
       name: server12
       targetPort: 9848
@@ -49,13 +49,13 @@ apiVersion: apps/v1
 kind: StatefulSet
 metadata:
   name: helm-nacos
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     cmii.app: helm-nacos
     cmii.type: middleware
     octopus.control: nacos-wdd
     app.kubernetes.io/managed-by: Helm
-    app.kubernetes.io/version: 5.6.0
+    app.kubernetes.io/version: 5.7.0
 spec:
   serviceName: helm-nacos
   replicas: 1
@@ -70,7 +70,7 @@ spec:
         cmii.type: middleware
         octopus.control: nacos-wdd
         app.kubernetes.io/managed-by: octopus
-        app.kubernetes.io/version: 5.6.0
+        app.kubernetes.io/version: 5.7.0
       annotations:
         pod.alpha.kubernetes.io/initialized: "true"
     spec:
@@ -79,7 +79,7 @@ spec:
       affinity: { }
       containers:
         - name: nacos-server
-          image: harbor.cdcyy.com.cn/cmii/nacos-server:v2.1.2
+          image: 10.40.51.5:8033/cmii/nacos-server:v2.1.2
          ports:
            - containerPort: 8848
              name: dashboard


@@ -0,0 +1,38 @@
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: test-claim
annotations:
    volume.beta.kubernetes.io/storage-class: "nfs-prod-distribute" # must match metadata.name in nfs-StorageClass.yaml
spec:
accessModes:
- ReadWriteOnce
storageClassName: nfs-prod-distribute
resources:
requests:
storage: 1Mi
---
kind: Pod
apiVersion: v1
metadata:
name: test-pod
spec:
imagePullSecrets:
- name: harborsecret
containers:
- name: test-pod
image: 10.40.51.5:8033/cmii/busybox:latest
command:
- "/bin/sh"
args:
- "-c"
- "touch /mnt/NFS-CREATE-SUCCESS && exit 0 || exit 1" #创建一个SUCCESS文件后退出
volumeMounts:
- name: nfs-pvc
mountPath: "/mnt"
restartPolicy: "Never"
volumes:
- name: nfs-pvc
persistentVolumeClaim:
        claimName: test-claim # must match the PVC name above
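
Note: this claim/pod pair exercises dynamic provisioning end to end: the PVC binding proves the StorageClass can provision a volume, and the marker file proves the pod could write to the NFS export. A minimal check sequence, assuming the manifest is saved as nfs-test.yaml (the file name is an assumption):

kubectl apply -f nfs-test.yaml
kubectl get pvc test-claim   # should become Bound
kubectl get pod test-pod     # should reach Completed
# then confirm NFS-CREATE-SUCCESS exists in the exported directory on the NFS server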


@@ -0,0 +1,114 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
  namespace: kube-system # set the namespace to match your environment; the same applies below
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nfs-client-provisioner-runner
rules:
- apiGroups: [ "" ]
resources: [ "persistentvolumes" ]
verbs: [ "get", "list", "watch", "create", "delete" ]
- apiGroups: [ "" ]
resources: [ "persistentvolumeclaims" ]
verbs: [ "get", "list", "watch", "update" ]
- apiGroups: [ "storage.k8s.io" ]
resources: [ "storageclasses" ]
verbs: [ "get", "list", "watch" ]
- apiGroups: [ "" ]
resources: [ "events" ]
verbs: [ "create", "update", "patch" ]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: kube-system
roleRef:
kind: ClusterRole
# name: nfs-client-provisioner-runner
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: kube-system
rules:
- apiGroups: [ "" ]
resources: [ "endpoints" ]
verbs: [ "get", "list", "watch", "create", "update", "patch" ]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: kube-system
roleRef:
kind: Role
name: leader-locking-nfs-client-provisioner
apiGroup: rbac.authorization.k8s.io
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: nfs-prod-distribute
provisioner: cmlc-nfs-storage # must match the PROVISIONER_NAME env var in the provisioner Deployment below
parameters:
  archiveOnDelete: "false"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-client-provisioner
labels:
app: nfs-client-provisioner
# replace with namespace where provisioner is deployed
  namespace: kube-system # keep in line with the namespace used in the RBAC objects above
spec:
replicas: 1
selector:
matchLabels:
app: nfs-client-provisioner
strategy:
type: Recreate
template:
metadata:
labels:
app: nfs-client-provisioner
spec:
imagePullSecrets:
- name: harborsecret
serviceAccountName: nfs-client-provisioner
containers:
- name: nfs-client-provisioner
image: 10.40.51.5:8033/cmii/nfs-subdir-external-provisioner:v4.0.2
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: cmlc-nfs-storage
- name: NFS_SERVER
value: 10.40.51.5
- name: NFS_PATH
value: /var/lib/docker/nfs_data
volumes:
- name: nfs-client-root
nfs:
server: 10.40.51.5
path: /var/lib/docker/nfs_data
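
Note: the StorageClass provisioner name (cmlc-nfs-storage) must equal the PROVISIONER_NAME value here, and NFS_SERVER/NFS_PATH must point at the export backing the nfs-client-root volume. If PVCs stay Pending, the provisioner log is the first place to look:

kubectl get storageclass nfs-prod-distribute
kubectl -n kube-system logs deploy/nfs-client-provisioner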


@@ -3,12 +3,12 @@ apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: nfs-backend-log-pvc
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     cmii.type: middleware-base
     cmii.app: nfs-backend-log-pvc
     helm.sh/chart: all-persistence-volume-claims-1.1.0
-    app.kubernetes.io/version: 5.6.0
+    app.kubernetes.io/version: 5.7.0
 spec:
   storageClassName: nfs-prod-distribute
   accessModes:
@@ -22,12 +22,12 @@ apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: helm-emqxs
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     cmii.type: middleware-base
     cmii.app: helm-emqxs
     helm.sh/chart: all-persistence-volume-claims-1.1.0
-    app.kubernetes.io/version: 5.6.0
+    app.kubernetes.io/version: 5.7.0
 spec:
   storageClassName: nfs-prod-distribute
   accessModes:
@@ -41,12 +41,12 @@ apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: helm-mongo
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     cmii.type: middleware-base
     cmii.app: helm-mongo
     helm.sh/chart: all-persistence-volume-claims-1.1.0
-    app.kubernetes.io/version: 5.6.0
+    app.kubernetes.io/version: 5.7.0
 spec:
   storageClassName: nfs-prod-distribute
   accessModes:
@@ -60,12 +60,12 @@ apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: helm-rabbitmq
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     cmii.type: middleware-base
     cmii.app: helm-rabbitmq
     helm.sh/chart: all-persistence-volume-claims-1.1.0
-    app.kubernetes.io/version: 5.6.0
+    app.kubernetes.io/version: 5.7.0
 spec:
   storageClassName: nfs-prod-distribute
   accessModes:


@@ -3,11 +3,11 @@ apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: helm-rabbitmq
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     app.kubernetes.io/name: helm-rabbitmq
     helm.sh/chart: rabbitmq-8.26.1
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     app.kubernetes.io/managed-by: rabbitmq
 automountServiceAccountToken: true
 secrets:
@@ -17,33 +17,33 @@ apiVersion: v1
 kind: Secret
 metadata:
   name: helm-rabbitmq
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     app.kubernetes.io/name: helm-rabbitmq
     helm.sh/chart: rabbitmq-8.26.1
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     app.kubernetes.io/managed-by: rabbitmq
 type: Opaque
 data:
-  rabbitmq-password: "N3YmNyN3MWVmKVQt"
+  rabbitmq-password: "blljUk45MXIuX2hq"
   rabbitmq-erlang-cookie: "emFBRmt1ZU1xMkJieXZvdHRYbWpoWk52UThuVXFzcTU="
 ---
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: helm-rabbitmq-config
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     app.kubernetes.io/name: helm-rabbitmq
     helm.sh/chart: rabbitmq-8.26.1
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     app.kubernetes.io/managed-by: rabbitmq
 data:
   rabbitmq.conf: |-
     ## Username and password
     ##
     default_user = admin
-    default_pass = 7v&7#w1ef)T-
+    default_pass = nYcRN91r._hj
     ## Clustering
     ##
     cluster_formation.peer_discovery_backend = rabbit_peer_discovery_k8s
@@ -63,11 +63,11 @@ kind: Role
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: helm-rabbitmq-endpoint-reader
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     app.kubernetes.io/name: helm-rabbitmq
     helm.sh/chart: rabbitmq-8.26.1
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     app.kubernetes.io/managed-by: rabbitmq
 rules:
   - apiGroups: [ "" ]
@@ -81,11 +81,11 @@ kind: RoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: helm-rabbitmq-endpoint-reader
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     app.kubernetes.io/name: helm-rabbitmq
     helm.sh/chart: rabbitmq-8.26.1
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     app.kubernetes.io/managed-by: rabbitmq
 subjects:
   - kind: ServiceAccount
@@ -99,11 +99,11 @@ apiVersion: v1
 kind: Service
 metadata:
   name: helm-rabbitmq-headless
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     app.kubernetes.io/name: helm-rabbitmq
     helm.sh/chart: rabbitmq-8.26.1
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     app.kubernetes.io/managed-by: rabbitmq
 spec:
   clusterIP: None
@@ -122,18 +122,18 @@ spec:
       targetPort: stats
   selector:
     app.kubernetes.io/name: helm-rabbitmq
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
   publishNotReadyAddresses: true
 ---
 apiVersion: v1
 kind: Service
 metadata:
   name: helm-rabbitmq
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     app.kubernetes.io/name: helm-rabbitmq
     helm.sh/chart: rabbitmq-8.26.1
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     app.kubernetes.io/managed-by: rabbitmq
 spec:
   type: NodePort
@@ -141,24 +141,24 @@ spec:
     - name: amqp
       port: 5672
       targetPort: amqp
-      nodePort: 35674
+      nodePort: 35672
     - name: dashboard
       port: 15672
       targetPort: dashboard
-      nodePort: 36677
+      nodePort: 36675
   selector:
     app.kubernetes.io/name: helm-rabbitmq
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
 ---
 apiVersion: apps/v1
 kind: StatefulSet
 metadata:
   name: helm-rabbitmq
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     app.kubernetes.io/name: helm-rabbitmq
     helm.sh/chart: rabbitmq-8.26.1
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     app.kubernetes.io/managed-by: rabbitmq
 spec:
   serviceName: helm-rabbitmq-headless
@@ -169,13 +169,13 @@ spec:
   selector:
     matchLabels:
       app.kubernetes.io/name: helm-rabbitmq
-      app.kubernetes.io/release: uavcloud-devoperation
+      app.kubernetes.io/release: jsntejpt
   template:
     metadata:
       labels:
         app.kubernetes.io/name: helm-rabbitmq
         helm.sh/chart: rabbitmq-8.26.1
-        app.kubernetes.io/release: uavcloud-devoperation
+        app.kubernetes.io/release: jsntejpt
         app.kubernetes.io/managed-by: rabbitmq
       annotations:
         checksum/config: d6c2caa9572f64a06d9f7daa34c664a186b4778cd1697ef8e59663152fc628f1
@@ -191,7 +191,7 @@ spec:
       terminationGracePeriodSeconds: 120
       initContainers:
         - name: volume-permissions
-          image: harbor.cdcyy.com.cn/cmii/bitnami-shell:11-debian-11-r136
+          image: 10.40.51.5:8033/cmii/bitnami-shell:11-debian-11-r136
          imagePullPolicy: "Always"
          command:
            - /bin/bash
@@ -210,7 +210,7 @@ spec:
              mountPath: /bitnami/rabbitmq/mnesia
       containers:
         - name: rabbitmq
-          image: harbor.cdcyy.com.cn/cmii/rabbitmq:3.9.12-debian-10-r3
+          image: 10.40.51.5:8033/cmii/rabbitmq:3.9.12-debian-10-r3
          imagePullPolicy: "Always"
          env:
            - name: BITNAMI_DEBUG


@@ -4,22 +4,22 @@ kind: ServiceAccount
 automountServiceAccountToken: true
 metadata:
   name: helm-redis
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     app.kubernetes.io/name: redis-db
     octopus.control: redis-db-wdd
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     app.kubernetes.io/managed-by: octopus
 ---
 apiVersion: v1
 kind: Secret
 metadata:
   name: helm-redis
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     app.kubernetes.io/name: redis-db
     octopus.control: redis-db-wdd
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     app.kubernetes.io/managed-by: octopus
 type: Opaque
 data:
@@ -29,11 +29,11 @@ apiVersion: v1
 kind: ConfigMap
 metadata:
   name: helm-redis-configuration
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     app.kubernetes.io/name: redis-db
     octopus.control: redis-db-wdd
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     app.kubernetes.io/managed-by: octopus
 data:
   redis.conf: |-
@@ -62,11 +62,11 @@ apiVersion: v1
 kind: ConfigMap
 metadata:
   name: helm-redis-health
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     app.kubernetes.io/name: redis-db
     octopus.control: redis-db-wdd
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     app.kubernetes.io/managed-by: octopus
 data:
   ping_readiness_local.sh: |-
@@ -151,11 +151,11 @@ apiVersion: v1
 kind: ConfigMap
 metadata:
   name: helm-redis-scripts
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     app.kubernetes.io/name: redis-db
     octopus.control: redis-db-wdd
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     app.kubernetes.io/managed-by: octopus
 data:
   start-master.sh: |
@@ -230,11 +230,11 @@ apiVersion: v1
 kind: Service
 metadata:
   name: helm-redis-headless
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     app.kubernetes.io/name: redis-db
     octopus.control: redis-db-wdd
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     app.kubernetes.io/managed-by: octopus
 spec:
   type: ClusterIP
@@ -245,18 +245,18 @@ spec:
       targetPort: redis
   selector:
     app.kubernetes.io/name: redis-db
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
 ---
 # Source: outside-deploy/charts/redis-db/templates/master/service.yaml
 apiVersion: v1
 kind: Service
 metadata:
   name: helm-redis-master
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     app.kubernetes.io/name: redis-db
     octopus.control: redis-db-wdd
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     app.kubernetes.io/managed-by: octopus
     cmii.type: middleware
     cmii.app: redis
@@ -271,7 +271,7 @@ spec:
       nodePort: null
   selector:
     app.kubernetes.io/name: redis-db
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     cmii.type: middleware
     cmii.app: redis
     app.kubernetes.io/component: master
@@ -281,11 +281,11 @@ apiVersion: v1
 kind: Service
 metadata:
   name: helm-redis-replicas
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     app.kubernetes.io/name: redis-db
     octopus.control: redis-db-wdd
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     app.kubernetes.io/managed-by: octopus
     app.kubernetes.io/component: replica
 spec:
@@ -297,7 +297,7 @@ spec:
       nodePort: null
   selector:
     app.kubernetes.io/name: redis-db
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     app.kubernetes.io/component: replica
 ---
 # Source: outside-deploy/charts/redis-db/templates/master/statefulset.yaml
@@ -305,11 +305,11 @@ apiVersion: apps/v1
 kind: StatefulSet
 metadata:
   name: helm-redis-master
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     app.kubernetes.io/name: redis-db
     octopus.control: redis-db-wdd
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     app.kubernetes.io/managed-by: octopus
     cmii.type: middleware
     cmii.app: redis
@@ -319,7 +319,7 @@ spec:
   selector:
     matchLabels:
       app.kubernetes.io/name: redis-db
-      app.kubernetes.io/release: uavcloud-devoperation
+      app.kubernetes.io/release: jsntejpt
       cmii.type: middleware
       cmii.app: redis
       app.kubernetes.io/component: master
@@ -332,7 +332,7 @@ spec:
       labels:
         app.kubernetes.io/name: redis-db
         octopus.control: redis-db-wdd
-        app.kubernetes.io/release: uavcloud-devoperation
+        app.kubernetes.io/release: jsntejpt
         app.kubernetes.io/managed-by: octopus
         cmii.type: middleware
         cmii.app: redis
@@ -352,7 +352,7 @@ spec:
       terminationGracePeriodSeconds: 30
       containers:
         - name: redis
-          image: harbor.cdcyy.com.cn/cmii/redis:6.2.6-debian-10-r0
+          image: 10.40.51.5:8033/cmii/redis:6.2.6-debian-10-r0
          imagePullPolicy: "Always"
          securityContext:
            runAsUser: 1001
@@ -448,11 +448,11 @@ apiVersion: apps/v1
 kind: StatefulSet
 metadata:
   name: helm-redis-replicas
-  namespace: uavcloud-devoperation
+  namespace: jsntejpt
   labels:
     app.kubernetes.io/name: redis-db
     octopus.control: redis-db-wdd
-    app.kubernetes.io/release: uavcloud-devoperation
+    app.kubernetes.io/release: jsntejpt
     app.kubernetes.io/managed-by: octopus
     app.kubernetes.io/component: replica
 spec:
@@ -460,7 +460,7 @@ spec:
   selector:
     matchLabels:
       app.kubernetes.io/name: redis-db
-      app.kubernetes.io/release: uavcloud-devoperation
+      app.kubernetes.io/release: jsntejpt
       app.kubernetes.io/component: replica
   serviceName: helm-redis-headless
   updateStrategy:
@@ -471,7 +471,7 @@ spec:
       labels:
         app.kubernetes.io/name: redis-db
         octopus.control: redis-db-wdd
-        app.kubernetes.io/release: uavcloud-devoperation
+        app.kubernetes.io/release: jsntejpt
         app.kubernetes.io/managed-by: octopus
         app.kubernetes.io/component: replica
       annotations:
@@ -488,7 +488,7 @@ spec:
       terminationGracePeriodSeconds: 30
       containers:
         - name: redis
-          image: harbor.cdcyy.com.cn/cmii/redis:6.2.6-debian-10-r0
+          image: 10.40.51.5:8033/cmii/redis:6.2.6-debian-10-r0
          imagePullPolicy: "Always"
          securityContext:
            runAsUser: 1001
@@ -503,7 +503,7 @@ spec:
            - name: REDIS_REPLICATION_MODE
              value: slave
            - name: REDIS_MASTER_HOST
-              value: helm-redis-master-0.helm-redis-headless.uavcloud-devoperation.svc.cluster.local
+              value: helm-redis-master-0.helm-redis-headless.jsntejpt.svc.cluster.local
            - name: REDIS_MASTER_PORT_NUMBER
              value: "6379"
            - name: ALLOW_EMPTY_PASSWORD


@@ -0,0 +1,496 @@
---
kind: ConfigMap
apiVersion: v1
metadata:
name: helm-live-srs-cm
namespace: jsntejpt
labels:
cmii.app: live-srs
cmii.type: live
octopus.control: wdd
app.kubernetes.io/managed-by: octopus
helm.sh/chart: cmlc-live-srs-rtc-2.0.0
data:
srs.rtc.conf: |-
listen 31935;
max_connections 4096;
srs_log_tank console;
srs_log_level info;
srs_log_file /home/srs.log;
daemon off;
http_api {
enabled on;
listen 1985;
crossdomain on;
}
stats {
network 0;
}
http_server {
enabled on;
listen 8080;
dir /home/hls;
}
srt_server {
enabled on;
listen 30556;
maxbw 1000000000;
connect_timeout 4000;
peerlatency 600;
recvlatency 600;
}
rtc_server {
enabled on;
listen 30090;
candidate $CANDIDATE;
}
vhost __defaultVhost__ {
http_hooks {
enabled on;
on_publish http://helm-live-op-svc-v2:8080/hooks/on_push;
}
http_remux {
enabled on;
}
rtc {
enabled on;
rtmp_to_rtc on;
rtc_to_rtmp on;
keep_bframe off;
}
tcp_nodelay on;
min_latency on;
play {
gop_cache off;
mw_latency 100;
mw_msgs 10;
}
publish {
firstpkt_timeout 8000;
normal_timeout 4000;
mr on;
}
dvr {
enabled off;
dvr_path /home/dvr/[app]/[stream]/[2006][01]/[timestamp].mp4;
dvr_plan session;
}
hls {
enabled on;
hls_path /home/hls;
hls_fragment 10;
hls_window 60;
hls_m3u8_file [app]/[stream].m3u8;
hls_ts_file [app]/[stream]/[2006][01][02]/[timestamp]-[duration].ts;
hls_cleanup on;
hls_entry_prefix http://10.40.51.5:8088;
}
}
---
kind: Service
apiVersion: v1
metadata:
name: helm-live-srs-svc-exporter
namespace: jsntejpt
labels:
octopus.control: wdd
app.kubernetes.io/managed-by: octopus
spec:
ports:
- name: rtmp
protocol: TCP
port: 30935
targetPort: 30935
nodePort: 31935
- name: rtc
protocol: UDP
port: 30090
targetPort: 30090
nodePort: 30090
- name: rtc-tcp
protocol: TCP
port: 30090
targetPort: 30090
nodePort: 30090
- name: srt
protocol: UDP
port: 30556
targetPort: 30556
nodePort: 30556
- name: api
protocol: TCP
port: 1985
targetPort: 1985
nodePort: 30080
selector:
srs-role: rtc
type: NodePort
sessionAffinity: None
externalTrafficPolicy: Cluster
---
kind: Service
apiVersion: v1
metadata:
name: helm-live-srs-svc
namespace: jsntejpt
labels:
octopus.control: wdd
app.kubernetes.io/managed-by: octopus
spec:
ports:
- name: http
protocol: TCP
port: 8080
targetPort: 8080
- name: api
protocol: TCP
port: 1985
targetPort: 1985
selector:
srs-role: rtc
type: ClusterIP
sessionAffinity: None
---
kind: Service
apiVersion: v1
metadata:
name: helm-live-srsrtc-svc
namespace: jsntejpt
labels:
octopus.control: wdd
app.kubernetes.io/managed-by: octopus
spec:
ports:
- name: rtmp
protocol: TCP
port: 30935
targetPort: 30935
selector:
srs-role: rtc
type: ClusterIP
sessionAffinity: None
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
name: helm-live-srs-rtc
namespace: jsntejpt
labels:
octopus.control: wdd
app.kubernetes.io/managed-by: octopus
cmii.app: live-srs
cmii.type: live
helm.sh/chart: cmlc-live-srs-rtc-2.0.0
srs-role: rtc
spec:
replicas: 1
selector:
matchLabels:
srs-role: rtc
template:
metadata:
labels:
srs-role: rtc
spec:
volumes:
- name: srs-conf-file
configMap:
name: helm-live-srs-cm
items:
- key: srs.rtc.conf
path: docker.conf
defaultMode: 420
- name: srs-vol
emptyDir:
sizeLimit: 8Gi
containers:
- name: srs-rtc
image: 10.40.51.5:8033/cmii/srs:v5.0.195
ports:
- name: srs-rtmp
containerPort: 30935
protocol: TCP
- name: srs-api
containerPort: 1985
protocol: TCP
- name: srs-flv
containerPort: 8080
protocol: TCP
- name: srs-webrtc
containerPort: 30090
protocol: UDP
- name: srs-webrtc-tcp
containerPort: 30090
protocol: TCP
- name: srs-srt
containerPort: 30556
protocol: UDP
env:
- name: CANDIDATE
value: 10.40.51.5
resources:
limits:
cpu: 2000m
memory: 4Gi
requests:
cpu: 100m
memory: 256Mi
volumeMounts:
- name: srs-conf-file
mountPath: /usr/local/srs/conf/docker.conf
subPath: docker.conf
- name: srs-vol
mountPath: /home/dvr
subPath: jsntejpt/helm-live/dvr
- name: srs-vol
mountPath: /home/hls
subPath: jsntejpt/helm-live/hls
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: Always
- name: oss-adaptor
image: 10.40.51.5:8033/cmii/cmii-srs-oss-adaptor:2023-SA
env:
- name: OSS_ENDPOINT
value: 'http://10.40.51.5:9000'
- name: OSS_AK
value: cmii
- name: OSS_SK
value: 'B#923fC7mk'
- name: OSS_BUCKET
value: live-cluster-hls
- name: SRS_OP
value: 'http://helm-live-op-svc-v2:8080'
- name: MYSQL_ENDPOINT
value: 'helm-mysql:3306'
- name: MYSQL_USERNAME
value: k8s_admin
- name: MYSQL_PASSWORD
value: fP#UaH6qQ3)8
- name: MYSQL_DATABASE
value: cmii_live_srs_op
- name: MYSQL_TABLE
value: live_segment
- name: LOG_LEVEL
value: info
- name: OSS_META
value: 'yes'
resources:
limits:
cpu: 2000m
memory: 4Gi
requests:
cpu: 100m
memory: 256Mi
volumeMounts:
- name: srs-vol
mountPath: /cmii/share/hls
subPath: jsntejpt/helm-live/hls
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: Always
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
securityContext: { }
imagePullSecrets:
- name: harborsecret
affinity: { }
schedulerName: default-scheduler
serviceName: helm-live-srsrtc-svc
podManagementPolicy: OrderedReady
updateStrategy:
type: RollingUpdate
rollingUpdate:
partition: 0
revisionHistoryLimit: 10
---
# live-srs section
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: helm-live-op-v2
namespace: jsntejpt
labels:
octopus.control: wdd
app.kubernetes.io/managed-by: octopus
cmii.app: live-engine
cmii.type: live
helm.sh/chart: cmlc-live-live-op-2.0.0
live-role: op-v2
spec:
replicas: 1
selector:
matchLabels:
live-role: op-v2
template:
metadata:
labels:
live-role: op-v2
spec:
volumes:
- name: srs-conf-file
configMap:
name: helm-live-op-cm-v2
items:
- key: live.op.conf
path: bootstrap.yaml
defaultMode: 420
containers:
- name: helm-live-op-v2
image: 10.40.51.5:8033/cmii/cmii-live-operator:5.2.0
ports:
- name: operator
containerPort: 8080
protocol: TCP
resources:
limits:
cpu: 4800m
memory: 4Gi
requests:
cpu: 100m
memory: 256Mi
volumeMounts:
- name: srs-conf-file
mountPath: /cmii/bootstrap.yaml
subPath: bootstrap.yaml
livenessProbe:
httpGet:
path: /cmii/health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
periodSeconds: 20
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /cmii/health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
periodSeconds: 20
successThreshold: 1
failureThreshold: 3
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: Always
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
securityContext: { }
imagePullSecrets:
- name: harborsecret
affinity: { }
schedulerName: default-scheduler
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 25%
revisionHistoryLimit: 10
progressDeadlineSeconds: 600
---
kind: Service
apiVersion: v1
metadata:
name: helm-live-op-svc-v2
namespace: jsntejpt
labels:
octopus.control: wdd
app.kubernetes.io/managed-by: octopus
spec:
ports:
- protocol: TCP
port: 8080
targetPort: 8080
nodePort: 30333
selector:
live-role: op-v2
type: NodePort
sessionAffinity: None
---
kind: Service
apiVersion: v1
metadata:
name: helm-live-op-svc
namespace: jsntejpt
labels:
octopus.control: wdd
app.kubernetes.io/managed-by: octopus
spec:
ports:
- protocol: TCP
port: 8080
targetPort: 8080
selector:
live-role: op
type: ClusterIP
sessionAffinity: None
---
kind: ConfigMap
apiVersion: v1
metadata:
name: helm-live-op-cm-v2
namespace: jsntejpt
labels:
octopus.control: wdd
app.kubernetes.io/managed-by: octopus
cmii.app: live-engine
cmii.type: live
data:
live.op.conf: |-
server:
port: 8080
spring:
main:
allow-bean-definition-overriding: true
allow-circular-references: true
application:
name: cmii-live-operator
platform:
info:
name: cmii-live-operator
description: cmii-live-operator
version: 5.7.0
scanPackage: com.cmii.live.op
cloud:
nacos:
config:
username: developer
password: N@cos14Good
server-addr: helm-nacos:8848
extension-configs:
- data-id: cmii-live-operator.yml
group: 5.7.0
refresh: true
shared-configs:
- data-id: cmii-backend-system.yml
group: 5.7.0
refresh: true
discovery:
enabled: false
live:
engine:
type: srs
endpoint: 'http://helm-live-srs-svc:1985'
proto:
rtmp: 'rtmp://10.40.51.5:31935'
rtsp: 'rtsp://10.40.51.5:30554'
srt: 'srt://10.40.51.5:30556'
flv: 'http://10.40.51.5:30500'
hls: 'http://10.40.51.5:30500'
rtc: 'webrtc://10.40.51.5:30090'
replay: 'https://10.40.51.5:30333'
minio:
endpoint: http://10.40.51.5:9000
access-key: cmii
secret-key: B#923fC7mk
bucket: live-cluster-hls
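# Health-check sketch for the operator configured above: the Deployment's probes hit
# /cmii/health on container port 8080, which helm-live-op-svc-v2 exposes on NodePort
# 30333 (node IP assumed from these manifests):
#   curl http://10.40.51.5:30333/cmii/health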

File diff suppressed because it is too large


@@ -0,0 +1,448 @@
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-uas
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "uas",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-mws
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "mws",
AppClientId: "APP_uKniXPELlRERBBwK"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-securityh5
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "securityh5",
AppClientId: "APP_N3ImO0Ubfu9peRHD"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-uasms
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "uasms",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-traffic
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "traffic",
AppClientId: "APP_Jc8i2wOQ1t73QEJS"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-threedsimulation
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "threedsimulation",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-media
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "media",
AppClientId: "APP_4AU8lbifESQO4FD6"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-share
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "share",
AppClientId: "APP_4lVSVI0ZGxTssir8"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-qinghaitourism
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "qinghaitourism",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-supervision
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "supervision",
AppClientId: "APP_qqSu82THfexI8PLM"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-oms
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "oms",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-multiterminal
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "multiterminal",
AppClientId: "APP_PvdfRRRBPL8xbIwl"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-security
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "security",
AppClientId: "APP_JUSEMc7afyWXxvE7"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-seniclive
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "seniclive",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-qingdao
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "qingdao",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-dispatchh5
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "dispatchh5",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-pilot2cloud
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "pilot2cloud",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-supervisionh5
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "supervisionh5",
AppClientId: "APP_qqSu82THfexI8PLM"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-detection
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "detection",
AppClientId: "APP_FDHW2VLVDWPnnOCy"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-cmsportal
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "cmsportal",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-jiangsuwenlv
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "jiangsuwenlv",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-armypeople
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "armypeople",
AppClientId: "APP_UIegse6Lfou9pO1U"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-base
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "base",
AppClientId: "APP_9LY41OaKSqk2btY0"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-splice
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "splice",
AppClientId: "APP_zE0M3sTRXrCIJS8Y"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-hljtt
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "hljtt",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-visualization
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "visualization",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-ai-brain
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "ai-brain",
AppClientId: "APP_rafnuCAmBESIVYMH"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-emergency
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "emergency",
AppClientId: "APP_aGsTAY1uMZrpKdfk"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-open
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "open",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-hyper
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "hyper",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-pangu
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "",
AppClientId: "empty"
}
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tenant-prefix-logistics
namespace: jxejpt
data:
ingress-config.js: |-
var __GlobalIngressConfig = {
TenantEnvironment: "",
CloudHOST: "36.138.111.244:8088",
ApplicationShortName: "logistics",
AppClientId: "APP_PvdfRRRBPL8xbIwl"
}
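# To confirm one of the tenant ConfigMaps above rendered as expected, an illustrative
# kubectl read-back (the dot in the key name must be escaped inside jsonpath):
#   kubectl -n jxejpt get cm tenant-prefix-uas -o jsonpath='{.data.ingress-config\.js}'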


@@ -0,0 +1,309 @@
---
apiVersion: v1
kind: Namespace
metadata:
name: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
name: kubernetes-dashboard
namespace: kube-system
spec:
ports:
- port: 443
targetPort: 8443
nodePort: 39999
selector:
k8s-app: kubernetes-dashboard
type: NodePort
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kube-system
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-csrf
namespace: kube-system
type: Opaque
data:
csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-key-holder
namespace: kube-system
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-settings
namespace: kube-system
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
rules:
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [ "" ]
resources: [ "secrets" ]
resourceNames: [ "kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf" ]
verbs: [ "get", "update", "delete" ]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [ "" ]
resources: [ "configmaps" ]
resourceNames: [ "kubernetes-dashboard-settings" ]
verbs: [ "get", "update" ]
# Allow Dashboard to get metrics.
- apiGroups: [ "" ]
resources: [ "services" ]
resourceNames: [ "heapster", "dashboard-metrics-scraper" ]
verbs: [ "proxy" ]
- apiGroups: [ "" ]
resources: [ "services/proxy" ]
resourceNames: [ "heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper" ]
verbs: [ "get" ]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
rules:
# Allow Metrics Scraper to get metrics from the Metrics server
- apiGroups: [ "metrics.k8s.io" ]
resources: [ "pods", "nodes" ]
verbs: [ "get", "list", "watch" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
imagePullSecrets:
- name: harborsecret
containers:
- name: kubernetes-dashboard
image: 10.20.1.135:8033/cmii/dashboard:v2.0.1
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
- --namespace=kube-system
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: { }
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kube-system
spec:
ports:
- port: 8000
targetPort: 8000
selector:
k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kube-system
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: dashboard-metrics-scraper
template:
metadata:
labels:
k8s-app: dashboard-metrics-scraper
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
spec:
containers:
- name: dashboard-metrics-scraper
image: 10.20.1.135:8033/cmii/metrics-scraper:v1.0.4
ports:
- containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 8000
initialDelaySeconds: 30
timeoutSeconds: 30
volumeMounts:
- mountPath: /tmp
name: tmp-volume
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
volumes:
- name: tmp-volume
emptyDir: { }
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kube-system
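# Login token for the dashboard exposed on NodePort 39999 (a sketch; the first form is
# the classic secret lookup, the second requires Kubernetes v1.24+):
#   kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
#   kubectl -n kube-system create token admin-user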


@@ -0,0 +1,274 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: helm-emqxs
namespace: jxejpt
---
apiVersion: v1
kind: ConfigMap
metadata:
name: helm-emqxs-env
namespace: jxejpt
labels:
cmii.type: middleware
cmii.app: helm-emqxs
cmii.emqx.architecture: cluster
helm.sh/chart: emqx-1.1.0
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 6.0.0
data:
EMQX_CLUSTER__K8S__APISERVER: "https://kubernetes.default.svc.cluster.local:443"
EMQX_NAME: "helm-emqxs"
EMQX_CLUSTER__DISCOVERY: "k8s"
EMQX_CLUSTER__K8S__APP_NAME: "helm-emqxs"
EMQX_CLUSTER__K8S__SERVICE_NAME: "helm-emqxs-headless"
EMQX_CLUSTER__K8S__ADDRESS_TYPE: "dns"
EMQX_CLUSTER__K8S__NAMESPACE: "jxejpt"
EMQX_CLUSTER__K8S__SUFFIX: "svc.cluster.local"
EMQX_ALLOW_ANONYMOUS: "false"
EMQX_ACL_NOMATCH: "deny"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: helm-emqxs-cm
namespace: jxejpt
labels:
cmii.type: middleware
cmii.app: helm-emqxs
cmii.emqx.architecture: cluster
helm.sh/chart: emqx-1.1.0
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 6.0.0
data:
emqx_auth_mnesia.conf: |-
auth.mnesia.password_hash = sha256
# clientid authentication data
# auth.client.1.clientid = admin
# auth.client.1.password = 4YPk*DS%+5
## username authentication data
auth.user.1.username = admin
auth.user.1.password = odD8#Ve7.B
auth.user.2.username = cmlc
auth.user.2.password = odD8#Ve7.B
acl.conf: |-
{allow, {user, "admin"}, pubsub, ["admin/#"]}.
{allow, {user, "dashboard"}, subscribe, ["$SYS/#"]}.
{allow, {ipaddr, "127.0.0.1"}, pubsub, ["$SYS/#", "#"]}.
{deny, all, subscribe, ["$SYS/#", {eq, "#"}]}.
{allow, all}.
loaded_plugins: |-
{emqx_auth_mnesia,true}.
{emqx_management, true}.
{emqx_recon, true}.
{emqx_retainer, false}.
{emqx_dashboard, true}.
{emqx_telemetry, true}.
{emqx_rule_engine, true}.
{emqx_bridge_mqtt, false}.
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: helm-emqxs
namespace: jxejpt
labels:
cmii.type: middleware
cmii.app: helm-emqxs
cmii.emqx.architecture: cluster
helm.sh/chart: emqx-1.1.0
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 6.0.0
spec:
replicas: 1
serviceName: helm-emqxs-headless
updateStrategy:
type: RollingUpdate
selector:
matchLabels:
cmii.type: middleware
cmii.app: helm-emqxs
cmii.emqx.architecture: cluster
template:
metadata:
labels:
cmii.type: middleware
cmii.app: helm-emqxs
cmii.emqx.architecture: cluster
helm.sh/chart: emqx-1.1.0
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 6.0.0
spec:
affinity: { }
imagePullSecrets:
- name: harborsecret
serviceAccountName: helm-emqxs
containers:
- name: helm-emqxs
image: 10.20.1.135:8033/cmii/emqx:4.4.19
imagePullPolicy: Always
ports:
- name: mqtt
containerPort: 1883
- name: mqttssl
containerPort: 8883
- name: mgmt
containerPort: 8081
- name: ws
containerPort: 8083
- name: wss
containerPort: 8084
- name: dashboard
containerPort: 18083
- name: ekka
containerPort: 4370
envFrom:
- configMapRef:
name: helm-emqxs-env
resources: { }
volumeMounts:
- name: emqx-data
mountPath: "/opt/emqx/data/mnesia"
readOnly: false
- name: helm-emqxs-cm
mountPath: "/opt/emqx/etc/plugins/emqx_auth_mnesia.conf"
subPath: emqx_auth_mnesia.conf
readOnly: false
# - name: helm-emqxs-cm
# mountPath: "/opt/emqx/etc/acl.conf"
# subPath: "acl.conf"
# readOnly: false
- name: helm-emqxs-cm
mountPath: "/opt/emqx/data/loaded_plugins"
subPath: loaded_plugins
readOnly: false
volumes:
- name: emqx-data
persistentVolumeClaim:
claimName: helm-emqxs
- name: helm-emqxs-cm
configMap:
name: helm-emqxs-cm
items:
- key: emqx_auth_mnesia.conf
path: emqx_auth_mnesia.conf
- key: acl.conf
path: acl.conf
- key: loaded_plugins
path: loaded_plugins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: helm-emqxs
namespace: jxejpt
rules:
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
- watch
- list
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: helm-emqxs
namespace: jxejpt
subjects:
- kind: ServiceAccount
name: helm-emqxs
namespace: jxejpt
roleRef:
kind: Role
name: helm-emqxs
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Service
metadata:
name: helm-emqxs
namespace: jxejpt
labels:
cmii.type: middleware
cmii.app: helm-emqxs
cmii.emqx.architecture: cluster
helm.sh/chart: emqx-1.1.0
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 6.0.0
spec:
type: NodePort
selector:
cmii.type: middleware
cmii.app: helm-emqxs
cmii.emqx.architecture: cluster
ports:
- port: 1883
name: mqtt
targetPort: 1883
nodePort: 31883
- port: 18083
name: dashboard
targetPort: 18083
nodePort: 38085
- port: 8083
name: mqtt-websocket
targetPort: 8083
nodePort: 38083
---
apiVersion: v1
kind: Service
metadata:
name: helm-emqxs-headless
namespace: jxejpt
labels:
cmii.type: middleware
cmii.app: helm-emqxs
cmii.emqx.architecture: cluster
helm.sh/chart: emqx-1.1.0
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 6.0.0
spec:
type: ClusterIP
clusterIP: None
selector:
cmii.type: middleware
cmii.app: helm-emqxs
cmii.emqx.architecture: cluster
ports:
- name: mqtt
port: 1883
protocol: TCP
targetPort: 1883
- name: mqttssl
port: 8883
protocol: TCP
targetPort: 8883
- name: mgmt
port: 8081
protocol: TCP
targetPort: 8081
- name: websocket
port: 8083
protocol: TCP
targetPort: 8083
- name: wss
port: 8084
protocol: TCP
targetPort: 8084
- name: dashboard
port: 18083
protocol: TCP
targetPort: 18083
- name: ekka
port: 4370
protocol: TCP
targetPort: 4370
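# Once the pod is Running, cluster membership can be verified from inside the container
# (emqx_ctl ships with the EMQX 4.x image; this check is a sketch, not part of the chart):
#   kubectl -n jxejpt exec helm-emqxs-0 -- emqx_ctl cluster status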

File diff suppressed because it is too large


@@ -0,0 +1,632 @@
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: frontend-applications-ingress
namespace: jxejpt
labels:
type: frontend
octopus.control: all-ingress-config-wdd
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 6.0.0
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/configuration-snippet: |
rewrite ^(/supervision)$ $1/ redirect;
rewrite ^(/supervisionh5)$ $1/ redirect;
rewrite ^(/pangu)$ $1/ redirect;
rewrite ^(/ai-brain)$ $1/ redirect;
rewrite ^(/armypeople)$ $1/ redirect;
rewrite ^(/base)$ $1/ redirect;
rewrite ^(/cmsportal)$ $1/ redirect;
rewrite ^(/detection)$ $1/ redirect;
rewrite ^(/dispatchh5)$ $1/ redirect;
rewrite ^(/emergency)$ $1/ redirect;
rewrite ^(/hljtt)$ $1/ redirect;
rewrite ^(/hyper)$ $1/ redirect;
rewrite ^(/jiangsuwenlv)$ $1/ redirect;
rewrite ^(/logistics)$ $1/ redirect;
rewrite ^(/media)$ $1/ redirect;
rewrite ^(/multiterminal)$ $1/ redirect;
rewrite ^(/mws)$ $1/ redirect;
rewrite ^(/oms)$ $1/ redirect;
rewrite ^(/open)$ $1/ redirect;
rewrite ^(/pilot2cloud)$ $1/ redirect;
rewrite ^(/qingdao)$ $1/ redirect;
rewrite ^(/qinghaitourism)$ $1/ redirect;
rewrite ^(/security)$ $1/ redirect;
rewrite ^(/securityh5)$ $1/ redirect;
rewrite ^(/seniclive)$ $1/ redirect;
rewrite ^(/share)$ $1/ redirect;
rewrite ^(/splice)$ $1/ redirect;
rewrite ^(/threedsimulation)$ $1/ redirect;
rewrite ^(/traffic)$ $1/ redirect;
rewrite ^(/uas)$ $1/ redirect;
rewrite ^(/uasms)$ $1/ redirect;
rewrite ^(/visualization)$ $1/ redirect;
spec:
rules:
- host: fake-domain.jxejpt.io
http:
paths:
- path: /?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform
servicePort: 9528
- path: /supervision/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-suav-platform-supervision
servicePort: 9528
- path: /supervisionh5/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-suav-platform-supervisionh5
servicePort: 9528
- path: /pangu/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform
servicePort: 9528
- path: /ai-brain/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-ai-brain
servicePort: 9528
- path: /armypeople/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-armypeople
servicePort: 9528
- path: /base/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-base
servicePort: 9528
- path: /cmsportal/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-cms-portal
servicePort: 9528
- path: /detection/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-detection
servicePort: 9528
- path: /dispatchh5/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-dispatchh5
servicePort: 9528
- path: /emergency/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-emergency-rescue
servicePort: 9528
- path: /hljtt/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-hljtt
servicePort: 9528
- path: /hyper/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-hyperspectral
servicePort: 9528
- path: /jiangsuwenlv/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-jiangsuwenlv
servicePort: 9528
- path: /logistics/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-logistics
servicePort: 9528
- path: /media/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-media
servicePort: 9528
- path: /multiterminal/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-multiterminal
servicePort: 9528
- path: /mws/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-mws
servicePort: 9528
- path: /oms/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-oms
servicePort: 9528
- path: /open/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-open
servicePort: 9528
- path: /pilot2cloud/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-pilot2-to-cloud
servicePort: 9528
- path: /qingdao/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-qingdao
servicePort: 9528
- path: /qinghaitourism/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-qinghaitourism
servicePort: 9528
- path: /security/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-security
servicePort: 9528
- path: /securityh5/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-securityh5
servicePort: 9528
- path: /seniclive/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-seniclive
servicePort: 9528
- path: /share/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-share
servicePort: 9528
- path: /splice/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-splice
servicePort: 9528
- path: /threedsimulation/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-threedsimulation
servicePort: 9528
- path: /traffic/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-traffic
servicePort: 9528
- path: /uas/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-uas
servicePort: 9528
- path: /uasms/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-uasms
servicePort: 9528
- path: /visualization/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-platform-visualization
servicePort: 9528
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: backend-applications-ingress
namespace: jxejpt
labels:
type: backend
octopus.control: all-ingress-config-wdd
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 6.0.0
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/enable-cors: "true"
spec:
rules:
- host: cmii-admin-data.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-admin-data
servicePort: 8080
- host: cmii-admin-gateway.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-admin-gateway
servicePort: 8080
- host: cmii-admin-user.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-admin-user
servicePort: 8080
- host: cmii-app-release.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-app-release
servicePort: 8080
- host: cmii-open-gateway.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-open-gateway
servicePort: 8080
- host: cmii-suav-supervision.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-suav-supervision
servicePort: 8080
- host: cmii-uas-gateway.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uas-gateway
servicePort: 8080
- host: cmii-uas-lifecycle.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uas-lifecycle
servicePort: 8080
- host: cmii-uav-airspace.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-airspace
servicePort: 8080
- host: cmii-uav-alarm.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-alarm
servicePort: 8080
- host: cmii-uav-autowaypoint.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-autowaypoint
servicePort: 8080
- host: cmii-uav-brain.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-brain
servicePort: 8080
- host: cmii-uav-bridge.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-bridge
servicePort: 8080
- host: cmii-uav-cloud-live.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-cloud-live
servicePort: 8080
- host: cmii-uav-clusters.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-clusters
servicePort: 8080
- host: cmii-uav-cms.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-cms
servicePort: 8080
- host: cmii-uav-data-post-process.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-data-post-process
servicePort: 8080
- host: cmii-uav-depotautoreturn.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-depotautoreturn
servicePort: 8080
- host: cmii-uav-developer.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-developer
servicePort: 8080
- host: cmii-uav-device.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-device
servicePort: 8080
- host: cmii-uav-emergency.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-emergency
servicePort: 8080
- host: cmii-uav-gateway.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-gateway
servicePort: 8080
- host: cmii-uav-gis-server.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-gis-server
servicePort: 8080
- host: cmii-uav-grid-datasource.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-grid-datasource
servicePort: 8080
- host: cmii-uav-grid-engine.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-grid-engine
servicePort: 8080
- host: cmii-uav-grid-manage.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-grid-manage
servicePort: 8080
- host: cmii-uav-industrial-portfolio.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-industrial-portfolio
servicePort: 8080
- host: cmii-uav-integration.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-integration
servicePort: 8080
- host: cmii-uav-iot-dispatcher.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-iot-dispatcher
servicePort: 8080
- host: cmii-uav-kpi-monitor.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-kpi-monitor
servicePort: 8080
- host: cmii-uav-logger.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-logger
servicePort: 8080
- host: cmii-uav-material-warehouse.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-material-warehouse
servicePort: 8080
- host: cmii-uav-mission.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-mission
servicePort: 8080
- host: cmii-uav-mqtthandler.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-mqtthandler
servicePort: 8080
- host: cmii-uav-multilink.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-multilink
servicePort: 8080
- host: cmii-uav-notice.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-notice
servicePort: 8080
- host: cmii-uav-oauth.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-oauth
servicePort: 8080
- host: cmii-uav-process.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-process
servicePort: 8080
- host: cmii-uav-sense-adapter.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-sense-adapter
servicePort: 8080
- host: cmii-uav-surveillance.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-surveillance
servicePort: 8080
- host: cmii-uav-sync.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-sync
servicePort: 8080
- host: cmii-uav-threedsimulation.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-threedsimulation
servicePort: 8080
- host: cmii-uav-tower.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-tower
servicePort: 8080
- host: cmii-uav-user.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-user
servicePort: 8080
- host: cmii-uav-waypoint.uavcloud-jxejpt.io
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-waypoint
servicePort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: all-gateways-ingress
namespace: jxejpt
labels:
type: api-gateway
octopus.control: all-ingress-config-1.1.0
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 6.0.0
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
spec:
rules:
- host: fake-domain.jxejpt.io
http:
paths:
- path: /oms/api/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-admin-gateway
servicePort: 8080
- path: /open/api/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-open-gateway
servicePort: 8080
- path: /api/?(.*)
pathType: ImplementationSpecific
backend:
serviceName: cmii-uav-gateway
servicePort: 8080
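# The gateway routing above can be exercised without DNS by pinning the Host header
# (a sketch; <ingress-node> and the /api/actuator/health path are assumptions, since
# the rewrite-target strips the /api/ prefix before the request reaches cmii-uav-gateway):
#   curl -H 'Host: fake-domain.jxejpt.io' http://<ingress-node>/api/actuator/health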


@@ -0,0 +1,78 @@
---
apiVersion: v1
kind: Service
metadata:
name: helm-mongo
namespace: jxejpt
labels:
cmii.app: helm-mongo
cmii.type: middleware
helm.sh/chart: mongo-1.1.0
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 6.0.0
spec:
type: NodePort
selector:
cmii.app: helm-mongo
cmii.type: middleware
ports:
- port: 27017
name: server-27017
targetPort: 27017
nodePort: 37017
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: helm-mongo
namespace: jxejpt
labels:
cmii.app: helm-mongo
cmii.type: middleware
helm.sh/chart: mongo-1.1.0
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 6.0.0
spec:
serviceName: helm-mongo
replicas: 1
selector:
matchLabels:
cmii.app: helm-mongo
cmii.type: middleware
template:
metadata:
labels:
cmii.app: helm-mongo
cmii.type: middleware
helm.sh/chart: mongo-1.1.0
app.kubernetes.io/managed-by: octopus-control
app.kubernetes.io/version: 6.0.0
annotations:
pod.alpha.kubernetes.io/initialized: "true"
spec:
imagePullSecrets:
- name: harborsecret
affinity: { }
containers:
- name: helm-mongo
image: 10.20.1.135:8033/cmii/mongo:5.0
resources: { }
ports:
- containerPort: 27017
name: mongo27017
protocol: TCP
env:
- name: MONGO_INITDB_ROOT_USERNAME
value: cmlc
- name: MONGO_INITDB_ROOT_PASSWORD
value: REdPza8#oVlt
volumeMounts:
- name: mongo-data
mountPath: /data/db
readOnly: false
subPath: default/helm-mongo/data/db
volumes:
- name: mongo-data
persistentVolumeClaim:
claimName: helm-mongo
---
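# Connectivity sketch for the MongoDB above, using the root credentials from the env vars
# (mongosh on the workstation is an assumption; note the '#' in the password must be
# URL-encoded as %23 inside a connection string):
#   mongosh "mongodb://cmlc:REdPza8%23oVlt@<node-ip>:37017/admin"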


@@ -0,0 +1,423 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: helm-mysql
namespace: jxejpt
labels:
app.kubernetes.io/name: mysql-db
octopus.control: mysql-db-wdd
app.kubernetes.io/release: jxejpt
app.kubernetes.io/managed-by: octopus
annotations: { }
secrets:
- name: helm-mysql
---
apiVersion: v1
kind: Secret
metadata:
name: helm-mysql
namespace: jxejpt
labels:
app.kubernetes.io/name: mysql-db
octopus.control: mysql-db-wdd
app.kubernetes.io/release: jxejpt
app.kubernetes.io/managed-by: octopus
type: Opaque
data:
mysql-root-password: "UXpmWFFoZDNiUQ=="
mysql-password: "S0F0cm5PckFKNw=="
---
apiVersion: v1
kind: ConfigMap
metadata:
name: helm-mysql
namespace: jxejpt
labels:
app.kubernetes.io/name: mysql-db
octopus.control: mysql-db-wdd
app.kubernetes.io/release: jxejpt
app.kubernetes.io/managed-by: octopus
app.kubernetes.io/component: primary
data:
my.cnf: |-
[mysqld]
port=3306
basedir=/opt/bitnami/mysql
datadir=/bitnami/mysql/data
pid-file=/opt/bitnami/mysql/tmp/mysqld.pid
socket=/opt/bitnami/mysql/tmp/mysql.sock
log-error=/bitnami/mysql/data/error.log
general_log_file = /bitnami/mysql/data/general.log
slow_query_log_file = /bitnami/mysql/data/slow.log
innodb_data_file_path = ibdata1:512M:autoextend
innodb_buffer_pool_size = 512M
innodb_buffer_pool_instances = 2
innodb_log_file_size = 512M
innodb_log_files_in_group = 4
log-bin = /bitnami/mysql/data/mysql-bin
max_binlog_size=1G
transaction_isolation = REPEATABLE-READ
default_storage_engine = innodb
character-set-server = utf8mb4
collation-server=utf8mb4_bin
binlog_format = ROW
binlog_rows_query_log_events=on
binlog_cache_size=4M
binlog_expire_logs_seconds = 1296000
max_binlog_cache_size=2G
gtid_mode = on
enforce_gtid_consistency = 1
sync_binlog = 1
innodb_flush_log_at_trx_commit = 1
innodb_flush_method = O_DIRECT
log_slave_updates=1
relay_log_recovery = 1
relay-log-purge = 1
default_time_zone = '+08:00'
lower_case_table_names=1
log_bin_trust_function_creators=1
group_concat_max_len=67108864
innodb_io_capacity = 4000
innodb_io_capacity_max = 8000
innodb_flush_sync = 0
innodb_flush_neighbors = 0
innodb_write_io_threads = 8
innodb_read_io_threads = 8
innodb_purge_threads = 4
innodb_page_cleaners = 4
innodb_open_files = 65535
innodb_max_dirty_pages_pct = 50
innodb_lru_scan_depth = 4000
innodb_checksum_algorithm = crc32
innodb_lock_wait_timeout = 10
innodb_rollback_on_timeout = 1
innodb_print_all_deadlocks = 1
innodb_file_per_table = 1
innodb_online_alter_log_max_size = 4G
innodb_stats_on_metadata = 0
innodb_thread_concurrency = 0
innodb_sync_spin_loops = 100
innodb_spin_wait_delay = 30
lock_wait_timeout = 3600
slow_query_log = 1
long_query_time = 10
log_queries_not_using_indexes =1
log_throttle_queries_not_using_indexes = 60
min_examined_row_limit = 100
log_slow_admin_statements = 1
log_slow_slave_statements = 1
default_authentication_plugin=mysql_native_password
skip-name-resolve=1
explicit_defaults_for_timestamp=1
plugin_dir=/opt/bitnami/mysql/plugin
max_allowed_packet=128M
max_connections = 2000
max_connect_errors = 1000000
table_definition_cache=2000
table_open_cache_instances=64
tablespace_definition_cache=1024
thread_cache_size=256
interactive_timeout = 600
wait_timeout = 600
tmpdir=/opt/bitnami/mysql/tmp
bind-address=0.0.0.0
performance_schema = 1
performance_schema_instrument = '%memory%=on'
performance_schema_instrument = '%lock%=on'
innodb_monitor_enable=ALL
[mysql]
no-auto-rehash
[mysqldump]
quick
max_allowed_packet = 32M
[client]
port=3306
socket=/opt/bitnami/mysql/tmp/mysql.sock
default-character-set=UTF8
plugin_dir=/opt/bitnami/mysql/plugin
[manager]
port=3306
socket=/opt/bitnami/mysql/tmp/mysql.sock
pid-file=/opt/bitnami/mysql/tmp/mysqld.pid
---
apiVersion: v1
kind: ConfigMap
metadata:
name: helm-mysql-init-scripts
namespace: jxejpt
labels:
app.kubernetes.io/name: mysql-db
octopus.control: mysql-db-wdd
app.kubernetes.io/release: jxejpt
app.kubernetes.io/managed-by: octopus
app.kubernetes.io/component: primary
data:
create_users_grants_core.sql: |-
create user zyly@'%' identified by 'Cmii@451315';
grant select on *.* to zyly@'%';
create user zyly_qc@'%' identified by 'Uh)E_owCyb16';
grant all on *.* to zyly_qc@'%';
create user k8s_admin@'%' identified by 'fP#UaH6qQ3)8';
grant all on *.* to k8s_admin@'%';
create user audit_dba@'%' identified by 'PjCzqiBmJaTpgkoYXynH';
grant all on *.* to audit_dba@'%';
create user db_backup@'%' identified by 'RU5Pu(4FGdT9';
GRANT SELECT, RELOAD, PROCESS, LOCK TABLES, REPLICATION CLIENT, EVENT on *.* to db_backup@'%';
create user monitor@'%' identified by 'PL3#nGtrWbf-';
grant REPLICATION CLIENT on *.* to monitor@'%';
flush privileges;
---
kind: Service
apiVersion: v1
metadata:
name: cmii-mysql
namespace: jxejpt
labels:
app.kubernetes.io/component: primary
app.kubernetes.io/managed-by: octopus
app.kubernetes.io/name: mysql-db
app.kubernetes.io/release: jxejpt
cmii.app: mysql
cmii.type: middleware
octopus.control: mysql-db-wdd
spec:
ports:
- name: mysql
protocol: TCP
port: 13306
targetPort: mysql
selector:
app.kubernetes.io/component: primary
app.kubernetes.io/name: mysql-db
app.kubernetes.io/release: jxejpt
cmii.app: mysql
cmii.type: middleware
type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
name: helm-mysql-headless
namespace: jxejpt
labels:
app.kubernetes.io/name: mysql-db
octopus.control: mysql-db-wdd
app.kubernetes.io/release: jxejpt
app.kubernetes.io/managed-by: octopus
cmii.type: middleware
cmii.app: mysql
app.kubernetes.io/component: primary
annotations: { }
spec:
type: ClusterIP
clusterIP: None
publishNotReadyAddresses: true
ports:
- name: mysql
port: 3306
targetPort: mysql
selector:
app.kubernetes.io/name: mysql-db
app.kubernetes.io/release: jxejpt
cmii.type: middleware
cmii.app: mysql
app.kubernetes.io/component: primary
---
apiVersion: v1
kind: Service
metadata:
name: helm-mysql
namespace: jxejpt
labels:
app.kubernetes.io/name: mysql-db
octopus.control: mysql-db-wdd
app.kubernetes.io/release: jxejpt
app.kubernetes.io/managed-by: octopus
cmii.type: middleware
cmii.app: mysql
app.kubernetes.io/component: primary
annotations: { }
spec:
type: NodePort
ports:
- name: mysql
port: 3306
protocol: TCP
targetPort: mysql
nodePort: 33306
selector:
app.kubernetes.io/name: mysql-db
app.kubernetes.io/release: jxejpt
cmii.type: middleware
cmii.app: mysql
app.kubernetes.io/component: primary
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: helm-mysql
namespace: jxejpt
labels:
app.kubernetes.io/name: mysql-db
octopus.control: mysql-db-wdd
app.kubernetes.io/release: jxejpt
app.kubernetes.io/managed-by: octopus
cmii.type: middleware
cmii.app: mysql
app.kubernetes.io/component: primary
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: mysql-db
app.kubernetes.io/release: jxejpt
cmii.type: middleware
cmii.app: mysql
app.kubernetes.io/component: primary
serviceName: helm-mysql
updateStrategy:
type: RollingUpdate
template:
metadata:
annotations:
checksum/configuration: 6b60fa0f3a846a6ada8effdc4f823cf8003d42a8c8f630fe8b1b66d3454082dd
labels:
app.kubernetes.io/name: mysql-db
octopus.control: mysql-db-wdd
app.kubernetes.io/release: jxejpt
app.kubernetes.io/managed-by: octopus
cmii.type: middleware
cmii.app: mysql
app.kubernetes.io/component: primary
spec:
imagePullSecrets:
- name: harborsecret
serviceAccountName: helm-mysql
affinity: { }
nodeSelector:
mysql-deploy: "true"
securityContext:
fsGroup: 1001
initContainers:
- name: change-volume-permissions
image: 10.20.1.135:8033/cmii/bitnami-shell:11-debian-11-r136
imagePullPolicy: "Always"
command:
- /bin/bash
- -ec
- |
chown -R 1001:1001 /bitnami/mysql
securityContext:
runAsUser: 0
volumeMounts:
- name: mysql-data
mountPath: /bitnami/mysql
containers:
- name: mysql
image: 10.20.1.135:8033/cmii/mysql:8.1.0-debian-11-r42
imagePullPolicy: "IfNotPresent"
securityContext:
runAsUser: 1001
env:
- name: BITNAMI_DEBUG
value: "true"
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: helm-mysql
key: mysql-root-password
- name: MYSQL_DATABASE
value: "cmii"
ports:
- name: mysql
containerPort: 3306
livenessProbe:
failureThreshold: 5
initialDelaySeconds: 120
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 3
exec:
command:
- /bin/bash
- -ec
- |
password_aux="${MYSQL_ROOT_PASSWORD:-}"
if [[ -f "${MYSQL_ROOT_PASSWORD_FILE:-}" ]]; then
password_aux=$(cat "$MYSQL_ROOT_PASSWORD_FILE")
fi
mysqladmin status -uroot -p"${password_aux}"
readinessProbe:
failureThreshold: 5
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 3
exec:
command:
- /bin/bash
- -ec
- |
password_aux="${MYSQL_ROOT_PASSWORD:-}"
if [[ -f "${MYSQL_ROOT_PASSWORD_FILE:-}" ]]; then
password_aux=$(cat "$MYSQL_ROOT_PASSWORD_FILE")
fi
mysqladmin status -uroot -p"${password_aux}"
startupProbe:
failureThreshold: 60
initialDelaySeconds: 120
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
exec:
command:
- /bin/bash
- -ec
- |
password_aux="${MYSQL_ROOT_PASSWORD:-}"
if [[ -f "${MYSQL_ROOT_PASSWORD_FILE:-}" ]]; then
password_aux=$(cat "$MYSQL_ROOT_PASSWORD_FILE")
fi
mysqladmin status -uroot -p"${password_aux}"
resources:
limits: { }
requests: { }
volumeMounts:
- name: mysql-data
mountPath: /bitnami/mysql
- name: custom-init-scripts
mountPath: /docker-entrypoint-initdb.d
- name: config
mountPath: /opt/bitnami/mysql/conf/my.cnf
subPath: my.cnf
volumes:
- name: config
configMap:
name: helm-mysql
- name: custom-init-scripts
configMap:
name: helm-mysql-init-scripts
- name: mysql-data
hostPath:
path: /var/lib/docker/mysql-pv/jxejpt/
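# Post-deploy check sketch against the NodePort 33306 service, using the k8s_admin
# account created by the init script (a local mysql client is assumed):
#   mysql -h <node-ip> -P 33306 -u k8s_admin -p'fP#UaH6qQ3)8' -e 'SELECT VERSION();'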


@@ -0,0 +1,130 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: helm-nacos-cm
namespace: jxejpt
labels:
cmii.app: helm-nacos
cmii.type: middleware
octopus.control: nacos-wdd
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/version: 6.0.0
data:
mysql.db.name: "cmii_nacos_config"
mysql.db.host: "helm-mysql"
mysql.port: "3306"
mysql.user: "k8s_admin"
mysql.password: "fP#UaH6qQ3)8"
---
apiVersion: v1
kind: Service
metadata:
name: helm-nacos
namespace: jxejpt
labels:
cmii.app: helm-nacos
cmii.type: middleware
octopus.control: nacos-wdd
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/version: 6.0.0
spec:
type: NodePort
selector:
cmii.app: helm-nacos
cmii.type: middleware
ports:
- port: 8848
name: server
targetPort: 8848
nodePort: 38848
- port: 9848
name: server12
targetPort: 9848
- port: 9849
name: server23
targetPort: 9849
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: helm-nacos
namespace: jxejpt
labels:
cmii.app: helm-nacos
cmii.type: middleware
octopus.control: nacos-wdd
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/version: 6.0.0
spec:
serviceName: helm-nacos
replicas: 1
selector:
matchLabels:
cmii.app: helm-nacos
cmii.type: middleware
template:
metadata:
labels:
cmii.app: helm-nacos
cmii.type: middleware
octopus.control: nacos-wdd
app.kubernetes.io/managed-by: octopus
app.kubernetes.io/version: 6.0.0
annotations:
pod.alpha.kubernetes.io/initialized: "true"
spec:
imagePullSecrets:
- name: harborsecret
affinity: { }
containers:
- name: nacos-server
image: 10.20.1.135:8033/cmii/nacos-server:v2.1.2
ports:
- containerPort: 8848
name: dashboard
- containerPort: 9848
name: tcp-9848
- containerPort: 9849
name: tcp-9849
env:
- name: NACOS_AUTH_ENABLE
value: "false"
- name: NACOS_REPLICAS
value: "1"
- name: MYSQL_SERVICE_DB_NAME
valueFrom:
configMapKeyRef:
name: helm-nacos-cm
key: mysql.db.name
- name: MYSQL_SERVICE_PORT
valueFrom:
configMapKeyRef:
name: helm-nacos-cm
key: mysql.port
- name: MYSQL_SERVICE_USER
valueFrom:
configMapKeyRef:
name: helm-nacos-cm
key: mysql.user
- name: MYSQL_SERVICE_PASSWORD
valueFrom:
configMapKeyRef:
name: helm-nacos-cm
key: mysql.password
- name: MYSQL_SERVICE_HOST
valueFrom:
configMapKeyRef:
name: helm-nacos-cm
key: mysql.db.host
- name: NACOS_SERVER_PORT
value: "8848"
- name: NACOS_APPLICATION_PORT
value: "8848"
- name: PREFER_HOST_MODE
value: "hostname"
- name: MODE
value: standalone
- name: SPRING_DATASOURCE_PLATFORM
value: mysql
---
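# With NACOS_AUTH_ENABLE=false and MODE=standalone, the console should answer without
# login on the NodePort mapped above (/nacos is the stock console path; node IP assumed):
#   curl -I http://<node-ip>:38848/nacos/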


@@ -0,0 +1,38 @@
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: test-claim
annotations:
volume.beta.kubernetes.io/storage-class: "nfs-prod-distribute" # must match metadata.name in nfs-StorageClass.yaml
spec:
accessModes:
- ReadWriteOnce
storageClassName: nfs-prod-distribute
resources:
requests:
storage: 1Mi
---
kind: Pod
apiVersion: v1
metadata:
name: test-pod
spec:
imagePullSecrets:
- name: harborsecret
containers:
- name: test-pod
image: 10.20.1.135:8033/cmii/busybox:latest
command:
- "/bin/sh"
args:
- "-c"
- "touch /mnt/NFS-CREATE-SUCCESS && exit 0 || exit 1" #创建一个SUCCESS文件后退出
volumeMounts:
- name: nfs-pvc
mountPath: "/mnt"
restartPolicy: "Never"
volumes:
- name: nfs-pvc
persistentVolumeClaim:
claimName: test-claim # must match the PVC name above
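# Verification sketch: after applying this test (the file name below is hypothetical),
# the PVC should bind, the pod should complete, and the marker file should land on the
# NFS export:
#   kubectl apply -f nfs-test.yaml
#   kubectl get pvc test-claim    # expect STATUS Bound
#   kubectl get pod test-pod      # expect Completed
#   ls /var/lib/docker/nfs_data/*/NFS-CREATE-SUCCESS   # on the NFS server 10.20.1.135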


@@ -0,0 +1,114 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: kube-system # adjust to your environment; the same applies below
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nfs-client-provisioner-runner
rules:
- apiGroups: [ "" ]
resources: [ "persistentvolumes" ]
verbs: [ "get", "list", "watch", "create", "delete" ]
- apiGroups: [ "" ]
resources: [ "persistentvolumeclaims" ]
verbs: [ "get", "list", "watch", "update" ]
- apiGroups: [ "storage.k8s.io" ]
resources: [ "storageclasses" ]
verbs: [ "get", "list", "watch" ]
- apiGroups: [ "" ]
resources: [ "events" ]
verbs: [ "create", "update", "patch" ]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: kube-system
roleRef:
kind: ClusterRole
# name: nfs-client-provisioner-runner
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: kube-system
rules:
- apiGroups: [ "" ]
resources: [ "endpoints" ]
verbs: [ "get", "list", "watch", "create", "update", "patch" ]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: kube-system
roleRef:
kind: Role
name: leader-locking-nfs-client-provisioner
apiGroup: rbac.authorization.k8s.io
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: nfs-prod-distribute
provisioner: cmlc-nfs-storage # must match the PROVISIONER_NAME env var in the provisioner deployment
parameters:
archiveOnDelete: "false"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-client-provisioner
labels:
app: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: kube-system # keep consistent with the namespace in the RBAC manifests
spec:
replicas: 1
selector:
matchLabels:
app: nfs-client-provisioner
strategy:
type: Recreate
template:
metadata:
labels:
app: nfs-client-provisioner
spec:
imagePullSecrets:
- name: harborsecret
serviceAccountName: nfs-client-provisioner
containers:
- name: nfs-client-provisioner
image: 10.20.1.135:8033/cmii/nfs-subdir-external-provisioner:v4.0.2
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: cmlc-nfs-storage
- name: NFS_SERVER
value: 10.20.1.135
- name: NFS_PATH
value: /var/lib/docker/nfs_data
volumes:
- name: nfs-client-root
nfs:
server: 10.20.1.135
path: /var/lib/docker/nfs_data
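# If PVCs stay Pending, these standard checks usually localize the problem:
#   kubectl get storageclass nfs-prod-distribute
#   kubectl -n kube-system logs deploy/nfs-client-provisioner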


@@ -0,0 +1,76 @@
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nfs-backend-log-pvc
namespace: jxejpt
labels:
cmii.type: middleware-base
cmii.app: nfs-backend-log-pvc
helm.sh/chart: all-persistence-volume-claims-1.1.0
app.kubernetes.io/version: 6.0.0
spec:
storageClassName: nfs-prod-distribute
accessModes:
- ReadWriteMany
volumeMode: Filesystem
resources:
requests:
storage: 100Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: helm-emqxs
namespace: jxejpt
labels:
cmii.type: middleware-base
cmii.app: helm-emqxs
helm.sh/chart: all-persistence-volume-claims-1.1.0
app.kubernetes.io/version: 6.0.0
spec:
storageClassName: nfs-prod-distribute
accessModes:
- ReadWriteMany
volumeMode: Filesystem
resources:
requests:
storage: 20Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: helm-mongo
namespace: jxejpt
labels:
cmii.type: middleware-base
cmii.app: helm-mongo
helm.sh/chart: all-persistence-volume-claims-1.1.0
app.kubernetes.io/version: 6.0.0
spec:
storageClassName: nfs-prod-distribute
accessModes:
- ReadWriteMany
volumeMode: Filesystem
resources:
requests:
storage: 30Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: helm-rabbitmq
namespace: jxejpt
labels:
cmii.type: middleware-base
cmii.app: helm-rabbitmq
helm.sh/chart: all-persistence-volume-claims-1.1.0
app.kubernetes.io/version: 6.0.0
spec:
storageClassName: nfs-prod-distribute
accessModes:
- ReadWriteMany
volumeMode: Filesystem
resources:
requests:
storage: 20Gi
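# All four claims above should reach Bound once the nfs-prod-distribute provisioner is
# healthy; a quick check:
#   kubectl -n jxejpt get pvc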


@@ -0,0 +1,328 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: helm-rabbitmq
namespace: jxejpt
labels:
app.kubernetes.io/name: helm-rabbitmq
helm.sh/chart: rabbitmq-8.26.1
app.kubernetes.io/release: jxejpt
app.kubernetes.io/managed-by: rabbitmq
automountServiceAccountToken: true
secrets:
- name: helm-rabbitmq
---
apiVersion: v1
kind: Secret
metadata:
name: helm-rabbitmq
namespace: jxejpt
labels:
app.kubernetes.io/name: helm-rabbitmq
helm.sh/chart: rabbitmq-8.26.1
app.kubernetes.io/release: jxejpt
app.kubernetes.io/managed-by: rabbitmq
type: Opaque
data:
rabbitmq-password: "blljUk45MXIuX2hq"
rabbitmq-erlang-cookie: "emFBRmt1ZU1xMkJieXZvdHRYbWpoWk52UThuVXFzcTU="
---
apiVersion: v1
kind: ConfigMap
metadata:
name: helm-rabbitmq-config
namespace: jxejpt
labels:
app.kubernetes.io/name: helm-rabbitmq
helm.sh/chart: rabbitmq-8.26.1
app.kubernetes.io/release: jxejpt
app.kubernetes.io/managed-by: rabbitmq
data:
rabbitmq.conf: |-
## Username and password
##
default_user = admin
default_pass = nYcRN91r._hj
## Clustering
##
cluster_formation.peer_discovery_backend = rabbit_peer_discovery_k8s
cluster_formation.k8s.host = kubernetes.default.svc.cluster.local
cluster_formation.node_cleanup.interval = 10
cluster_formation.node_cleanup.only_log_warning = true
cluster_partition_handling = autoheal
# queue master locator
queue_master_locator = min-masters
# enable guest user
loopback_users.guest = false
#default_vhost = default-vhost
#disk_free_limit.absolute = 50MB
#load_definitions = /app/load_definition.json
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: helm-rabbitmq-endpoint-reader
namespace: jxejpt
labels:
app.kubernetes.io/name: helm-rabbitmq
helm.sh/chart: rabbitmq-8.26.1
app.kubernetes.io/release: jxejpt
app.kubernetes.io/managed-by: rabbitmq
rules:
- apiGroups: [ "" ]
resources: [ "endpoints" ]
verbs: [ "get" ]
- apiGroups: [ "" ]
resources: [ "events" ]
verbs: [ "create" ]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: helm-rabbitmq-endpoint-reader
namespace: jxejpt
labels:
app.kubernetes.io/name: helm-rabbitmq
helm.sh/chart: rabbitmq-8.26.1
app.kubernetes.io/release: jxejpt
app.kubernetes.io/managed-by: rabbitmq
subjects:
- kind: ServiceAccount
name: helm-rabbitmq
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: helm-rabbitmq-endpoint-reader
---
apiVersion: v1
kind: Service
metadata:
name: helm-rabbitmq-headless
namespace: jxejpt
labels:
app.kubernetes.io/name: helm-rabbitmq
helm.sh/chart: rabbitmq-8.26.1
app.kubernetes.io/release: jxejpt
app.kubernetes.io/managed-by: rabbitmq
spec:
clusterIP: None
ports:
- name: epmd
port: 4369
targetPort: epmd
- name: amqp
port: 5672
targetPort: amqp
- name: dist
port: 25672
targetPort: dist
- name: dashboard
port: 15672
targetPort: dashboard
selector:
app.kubernetes.io/name: helm-rabbitmq
app.kubernetes.io/release: jxejpt
publishNotReadyAddresses: true
---
apiVersion: v1
kind: Service
metadata:
name: helm-rabbitmq
namespace: jxejpt
labels:
app.kubernetes.io/name: helm-rabbitmq
helm.sh/chart: rabbitmq-8.26.1
app.kubernetes.io/release: jxejpt
app.kubernetes.io/managed-by: rabbitmq
spec:
type: NodePort
ports:
- name: amqp
port: 5672
targetPort: amqp
nodePort: 35672
- name: dashboard
port: 15672
targetPort: dashboard
nodePort: 36675
selector:
app.kubernetes.io/name: helm-rabbitmq
app.kubernetes.io/release: jxejpt
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: helm-rabbitmq
namespace: jxejpt
labels:
app.kubernetes.io/name: helm-rabbitmq
helm.sh/chart: rabbitmq-8.26.1
app.kubernetes.io/release: jxejpt
app.kubernetes.io/managed-by: rabbitmq
spec:
serviceName: helm-rabbitmq-headless
podManagementPolicy: OrderedReady
replicas: 1
updateStrategy:
type: RollingUpdate
selector:
matchLabels:
app.kubernetes.io/name: helm-rabbitmq
app.kubernetes.io/release: jxejpt
template:
metadata:
labels:
app.kubernetes.io/name: helm-rabbitmq
helm.sh/chart: rabbitmq-8.26.1
app.kubernetes.io/release: jxejpt
app.kubernetes.io/managed-by: rabbitmq
annotations:
checksum/config: d6c2caa9572f64a06d9f7daa34c664a186b4778cd1697ef8e59663152fc628f1
checksum/secret: d764e7b3d999e7324d1afdfec6140092a612f04b6e0306818675815cec2f454f
spec:
imagePullSecrets:
- name: harborsecret
serviceAccountName: helm-rabbitmq
affinity: { }
securityContext:
fsGroup: 5001
runAsUser: 5001
terminationGracePeriodSeconds: 120
initContainers:
- name: volume-permissions
image: 10.20.1.135:8033/cmii/bitnami-shell:11-debian-11-r136
imagePullPolicy: "Always"
command:
- /bin/bash
args:
- -ec
- |
mkdir -p "/bitnami/rabbitmq/mnesia"
chown -R "5001:5001" "/bitnami/rabbitmq/mnesia"
securityContext:
runAsUser: 0
resources:
limits: { }
requests: { }
volumeMounts:
- name: data
mountPath: /bitnami/rabbitmq/mnesia
containers:
- name: rabbitmq
image: 10.20.1.135:8033/cmii/rabbitmq:3.9.12-debian-10-r3
imagePullPolicy: "Always"
env:
- name: BITNAMI_DEBUG
value: "false"
- name: MY_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: K8S_SERVICE_NAME
value: "helm-rabbitmq-headless"
- name: K8S_ADDRESS_TYPE
value: hostname
- name: RABBITMQ_FORCE_BOOT
value: "no"
- name: RABBITMQ_NODE_NAME
value: "rabbit@$(MY_POD_NAME).$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local"
- name: K8S_HOSTNAME_SUFFIX
value: ".$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local"
- name: RABBITMQ_MNESIA_DIR
value: "/bitnami/rabbitmq/mnesia/$(RABBITMQ_NODE_NAME)"
- name: RABBITMQ_LDAP_ENABLE
value: "no"
- name: RABBITMQ_LOGS
value: "-"
- name: RABBITMQ_ULIMIT_NOFILES
value: "65536"
- name: RABBITMQ_USE_LONGNAME
value: "true"
- name: RABBITMQ_ERL_COOKIE
valueFrom:
secretKeyRef:
name: helm-rabbitmq
key: rabbitmq-erlang-cookie
- name: RABBITMQ_LOAD_DEFINITIONS
value: "no"
- name: RABBITMQ_SECURE_PASSWORD
value: "yes"
- name: RABBITMQ_USERNAME
value: "admin"
- name: RABBITMQ_PASSWORD
valueFrom:
secretKeyRef:
name: helm-rabbitmq
key: rabbitmq-password
- name: RABBITMQ_PLUGINS
value: "rabbitmq_management, rabbitmq_peer_discovery_k8s, rabbitmq_shovel, rabbitmq_shovel_management, rabbitmq_auth_backend_ldap"
ports:
- name: amqp
containerPort: 5672
- name: dist
containerPort: 25672
- name: dashboard
containerPort: 15672
- name: epmd
containerPort: 4369
livenessProbe:
exec:
command:
- /bin/bash
- -ec
- rabbitmq-diagnostics -q ping
initialDelaySeconds: 120
periodSeconds: 30
timeoutSeconds: 20
successThreshold: 1
failureThreshold: 6
readinessProbe:
exec:
command:
- /bin/bash
- -ec
- rabbitmq-diagnostics -q check_running && rabbitmq-diagnostics -q check_local_alarms
initialDelaySeconds: 10
periodSeconds: 30
timeoutSeconds: 20
successThreshold: 1
failureThreshold: 3
lifecycle:
preStop:
exec:
command:
- /bin/bash
- -ec
- |
if [[ -f /opt/bitnami/scripts/rabbitmq/nodeshutdown.sh ]]; then
/opt/bitnami/scripts/rabbitmq/nodeshutdown.sh -t "120" -d "false"
else
rabbitmqctl stop_app
fi
resources:
limits: { }
requests: { }
volumeMounts:
- name: configuration
mountPath: /bitnami/rabbitmq/conf
- name: data
mountPath: /bitnami/rabbitmq/mnesia
volumes:
- name: configuration
configMap:
name: helm-rabbitmq-config
items:
- key: rabbitmq.conf
path: rabbitmq.conf
- name: data
persistentVolumeClaim:
claimName: helm-rabbitmq

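Once the StatefulSet is up, a quick smoke test confirms the broker answers and the management plugin is reachable. A minimal sketch, assuming kubectl access to the jxejpt namespace (the liveness probe above already relies on the same diagnostics command):

kubectl -n jxejpt exec helm-rabbitmq-0 -- rabbitmq-diagnostics -q ping
kubectl -n jxejpt exec helm-rabbitmq-0 -- rabbitmqctl list_users
# Management UI via the NodePort service; the node IP is environment-specific:
#   http://<node-ip>:36675  (user admin, password from the helm-rabbitmq Secret)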

@@ -0,0 +1,585 @@
---
apiVersion: v1
kind: ServiceAccount
automountServiceAccountToken: true
metadata:
name: helm-redis
namespace: jxejpt
labels:
app.kubernetes.io/name: redis-db
octopus.control: redis-db-wdd
app.kubernetes.io/release: jxejpt
app.kubernetes.io/managed-by: octopus
---
apiVersion: v1
kind: Secret
metadata:
name: helm-redis
namespace: jxejpt
labels:
app.kubernetes.io/name: redis-db
octopus.control: redis-db-wdd
app.kubernetes.io/release: jxejpt
app.kubernetes.io/managed-by: octopus
type: Opaque
data:
redis-password: "TWNhY2hlQDQ1MjI="
---
apiVersion: v1
kind: ConfigMap
metadata:
name: helm-redis-configuration
namespace: jxejpt
labels:
app.kubernetes.io/name: redis-db
octopus.control: redis-db-wdd
app.kubernetes.io/release: jxejpt
app.kubernetes.io/managed-by: octopus
data:
redis.conf: |-
# User-supplied common configuration:
# Enable AOF https://redis.io/topics/persistence#append-only-file
appendonly yes
# Disable RDB persistence, AOF persistence already enabled.
save ""
# End of common configuration
master.conf: |-
dir /data
# User-supplied master configuration:
rename-command FLUSHDB ""
rename-command FLUSHALL ""
# End of master configuration
replica.conf: |-
dir /data
slave-read-only yes
# User-supplied replica configuration:
rename-command FLUSHDB ""
rename-command FLUSHALL ""
# End of replica configuration
---
# Source: outside-deploy/charts/redis-db/templates/health-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: helm-redis-health
namespace: jxejpt
labels:
app.kubernetes.io/name: redis-db
octopus.control: redis-db-wdd
app.kubernetes.io/release: jxejpt
app.kubernetes.io/managed-by: octopus
data:
ping_readiness_local.sh: |-
#!/bin/bash
[[ -f $REDIS_PASSWORD_FILE ]] && export REDIS_PASSWORD="$(< "${REDIS_PASSWORD_FILE}")"
[[ -n "$REDIS_PASSWORD" ]] && export REDISCLI_AUTH="$REDIS_PASSWORD"
response=$(
timeout -s 3 $1 \
redis-cli \
-h localhost \
-p $REDIS_PORT \
ping
)
if [ "$response" != "PONG" ]; then
echo "$response"
exit 1
fi
ping_liveness_local.sh: |-
#!/bin/bash
[[ -f $REDIS_PASSWORD_FILE ]] && export REDIS_PASSWORD="$(< "${REDIS_PASSWORD_FILE}")"
[[ -n "$REDIS_PASSWORD" ]] && export REDISCLI_AUTH="$REDIS_PASSWORD"
response=$(
timeout -s 3 $1 \
redis-cli \
-h localhost \
-p $REDIS_PORT \
ping
)
if [ "$response" != "PONG" ] && [ "$response" != "LOADING Redis is loading the dataset in memory" ]; then
echo "$response"
exit 1
fi
ping_readiness_master.sh: |-
#!/bin/bash
[[ -f $REDIS_MASTER_PASSWORD_FILE ]] && export REDIS_MASTER_PASSWORD="$(< "${REDIS_MASTER_PASSWORD_FILE}")"
[[ -n "$REDIS_MASTER_PASSWORD" ]] && export REDISCLI_AUTH="$REDIS_MASTER_PASSWORD"
response=$(
timeout -s 3 $1 \
redis-cli \
-h $REDIS_MASTER_HOST \
-p $REDIS_MASTER_PORT_NUMBER \
ping
)
if [ "$response" != "PONG" ]; then
echo "$response"
exit 1
fi
ping_liveness_master.sh: |-
#!/bin/bash
[[ -f $REDIS_MASTER_PASSWORD_FILE ]] && export REDIS_MASTER_PASSWORD="$(< "${REDIS_MASTER_PASSWORD_FILE}")"
[[ -n "$REDIS_MASTER_PASSWORD" ]] && export REDISCLI_AUTH="$REDIS_MASTER_PASSWORD"
response=$(
timeout -s 3 $1 \
redis-cli \
-h $REDIS_MASTER_HOST \
-p $REDIS_MASTER_PORT_NUMBER \
ping
)
if [ "$response" != "PONG" ] && [ "$response" != "LOADING Redis is loading the dataset in memory" ]; then
echo "$response"
exit 1
fi
ping_readiness_local_and_master.sh: |-
script_dir="$(dirname "$0")"
exit_status=0
"$script_dir/ping_readiness_local.sh" $1 || exit_status=$?
"$script_dir/ping_readiness_master.sh" $1 || exit_status=$?
exit $exit_status
ping_liveness_local_and_master.sh: |-
script_dir="$(dirname "$0")"
exit_status=0
"$script_dir/ping_liveness_local.sh" $1 || exit_status=$?
"$script_dir/ping_liveness_master.sh" $1 || exit_status=$?
exit $exit_status
---
# Source: outside-deploy/charts/redis-db/templates/scripts-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: helm-redis-scripts
namespace: jxejpt
labels:
app.kubernetes.io/name: redis-db
octopus.control: redis-db-wdd
app.kubernetes.io/release: jxejpt
app.kubernetes.io/managed-by: octopus
data:
start-master.sh: |
#!/bin/bash
[[ -f $REDIS_PASSWORD_FILE ]] && export REDIS_PASSWORD="$(< "${REDIS_PASSWORD_FILE}")"
if [[ ! -f /opt/bitnami/redis/etc/master.conf ]];then
cp /opt/bitnami/redis/mounted-etc/master.conf /opt/bitnami/redis/etc/master.conf
fi
if [[ ! -f /opt/bitnami/redis/etc/redis.conf ]];then
cp /opt/bitnami/redis/mounted-etc/redis.conf /opt/bitnami/redis/etc/redis.conf
fi
ARGS=("--port" "${REDIS_PORT}")
ARGS+=("--requirepass" "${REDIS_PASSWORD}")
ARGS+=("--masterauth" "${REDIS_PASSWORD}")
ARGS+=("--include" "/opt/bitnami/redis/etc/redis.conf")
ARGS+=("--include" "/opt/bitnami/redis/etc/master.conf")
exec redis-server "${ARGS[@]}"
start-replica.sh: |
#!/bin/bash
get_port() {
hostname="$1"
type="$2"
port_var=$(echo "${hostname^^}_SERVICE_PORT_$type" | sed "s/-/_/g")
port=${!port_var}
if [ -z "$port" ]; then
case $type in
"SENTINEL")
echo 26379
;;
"REDIS")
echo 6379
;;
esac
else
echo $port
fi
}
get_full_hostname() {
hostname="$1"
echo "${hostname}.${HEADLESS_SERVICE}"
}
REDISPORT=$(get_port "$HOSTNAME" "REDIS")
[[ -f $REDIS_PASSWORD_FILE ]] && export REDIS_PASSWORD="$(< "${REDIS_PASSWORD_FILE}")"
[[ -f $REDIS_MASTER_PASSWORD_FILE ]] && export REDIS_MASTER_PASSWORD="$(< "${REDIS_MASTER_PASSWORD_FILE}")"
if [[ ! -f /opt/bitnami/redis/etc/replica.conf ]];then
cp /opt/bitnami/redis/mounted-etc/replica.conf /opt/bitnami/redis/etc/replica.conf
fi
if [[ ! -f /opt/bitnami/redis/etc/redis.conf ]];then
cp /opt/bitnami/redis/mounted-etc/redis.conf /opt/bitnami/redis/etc/redis.conf
fi
echo "" >> /opt/bitnami/redis/etc/replica.conf
echo "replica-announce-port $REDISPORT" >> /opt/bitnami/redis/etc/replica.conf
echo "replica-announce-ip $(get_full_hostname "$HOSTNAME")" >> /opt/bitnami/redis/etc/replica.conf
ARGS=("--port" "${REDIS_PORT}")
ARGS+=("--slaveof" "${REDIS_MASTER_HOST}" "${REDIS_MASTER_PORT_NUMBER}")
ARGS+=("--requirepass" "${REDIS_PASSWORD}")
ARGS+=("--masterauth" "${REDIS_MASTER_PASSWORD}")
ARGS+=("--include" "/opt/bitnami/redis/etc/redis.conf")
ARGS+=("--include" "/opt/bitnami/redis/etc/replica.conf")
exec redis-server "${ARGS[@]}"
---
# Source: outside-deploy/charts/redis-db/templates/headless-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: helm-redis-headless
namespace: jxejpt
labels:
app.kubernetes.io/name: redis-db
octopus.control: redis-db-wdd
app.kubernetes.io/release: jxejpt
app.kubernetes.io/managed-by: octopus
spec:
type: ClusterIP
clusterIP: None
ports:
- name: tcp-redis
port: 6379
targetPort: redis
selector:
app.kubernetes.io/name: redis-db
app.kubernetes.io/release: jxejpt
---
# Source: outside-deploy/charts/redis-db/templates/master/service.yaml
apiVersion: v1
kind: Service
metadata:
name: helm-redis-master
namespace: jxejpt
labels:
app.kubernetes.io/name: redis-db
octopus.control: redis-db-wdd
app.kubernetes.io/release: jxejpt
app.kubernetes.io/managed-by: octopus
cmii.type: middleware
cmii.app: redis
app.kubernetes.io/component: master
spec:
type: ClusterIP
ports:
- name: tcp-redis
port: 6379
targetPort: redis
nodePort: null
selector:
app.kubernetes.io/name: redis-db
app.kubernetes.io/release: jxejpt
cmii.type: middleware
cmii.app: redis
app.kubernetes.io/component: master
---
# Source: outside-deploy/charts/redis-db/templates/replicas/service.yaml
apiVersion: v1
kind: Service
metadata:
name: helm-redis-replicas
namespace: jxejpt
labels:
app.kubernetes.io/name: redis-db
octopus.control: redis-db-wdd
app.kubernetes.io/release: jxejpt
app.kubernetes.io/managed-by: octopus
app.kubernetes.io/component: replica
spec:
type: ClusterIP
ports:
- name: tcp-redis
port: 6379
targetPort: redis
nodePort: null
selector:
app.kubernetes.io/name: redis-db
app.kubernetes.io/release: jxejpt
app.kubernetes.io/component: replica
---
# Source: outside-deploy/charts/redis-db/templates/master/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: helm-redis-master
namespace: jxejpt
labels:
app.kubernetes.io/name: redis-db
octopus.control: redis-db-wdd
app.kubernetes.io/release: jxejpt
app.kubernetes.io/managed-by: octopus
cmii.type: middleware
cmii.app: redis
app.kubernetes.io/component: master
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: redis-db
app.kubernetes.io/release: jxejpt
cmii.type: middleware
cmii.app: redis
app.kubernetes.io/component: master
serviceName: helm-redis-headless
updateStrategy:
rollingUpdate: { }
type: RollingUpdate
template:
metadata:
labels:
app.kubernetes.io/name: redis-db
octopus.control: redis-db-wdd
app.kubernetes.io/release: jxejpt
app.kubernetes.io/managed-by: octopus
cmii.type: middleware
cmii.app: redis
app.kubernetes.io/component: master
annotations:
checksum/configmap: b64aa5db67e6e63811f3c1095b9fce34d83c86a471fccdda0e48eedb53a179b0
checksum/health: 6e0a6330e5ac63e565ae92af1444527d72d8897f91266f333555b3d323570623
checksum/scripts: b88df93710b7c42a76006e20218f05c6e500e6cc2affd4bb1985832f03166e98
checksum/secret: 43f1b0e20f9cb2de936bd182bc3683b720fc3cf4f4e76cb23c06a52398a50e8d
spec:
affinity: { }
securityContext:
fsGroup: 1001
serviceAccountName: helm-redis
imagePullSecrets:
- name: harborsecret
terminationGracePeriodSeconds: 30
containers:
- name: redis
image: 10.20.1.135:8033/cmii/redis:6.2.6-debian-10-r0
imagePullPolicy: "Always"
securityContext:
runAsUser: 1001
command:
- /bin/bash
args:
- -c
- /opt/bitnami/scripts/start-scripts/start-master.sh
env:
- name: BITNAMI_DEBUG
value: "false"
- name: REDIS_REPLICATION_MODE
value: master
- name: ALLOW_EMPTY_PASSWORD
value: "no"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: helm-redis
key: redis-password
- name: REDIS_TLS_ENABLED
value: "no"
- name: REDIS_PORT
value: "6379"
ports:
- name: redis
containerPort: 6379
livenessProbe:
initialDelaySeconds: 20
periodSeconds: 5
# One second longer than command timeout should prevent generation of zombie processes.
timeoutSeconds: 6
successThreshold: 1
failureThreshold: 5
exec:
command:
- sh
- -c
- /health/ping_liveness_local.sh 5
readinessProbe:
initialDelaySeconds: 20
periodSeconds: 5
timeoutSeconds: 2
successThreshold: 1
failureThreshold: 5
exec:
command:
- sh
- -c
- /health/ping_readiness_local.sh 1
resources:
limits:
cpu: "2"
memory: 8Gi
requests:
cpu: "2"
memory: 8Gi
volumeMounts:
- name: start-scripts
mountPath: /opt/bitnami/scripts/start-scripts
- name: health
mountPath: /health
- name: redis-data
mountPath: /data
- name: config
mountPath: /opt/bitnami/redis/mounted-etc
- name: redis-tmp-conf
mountPath: /opt/bitnami/redis/etc
- name: tmp
mountPath: /tmp
volumes:
- name: start-scripts
configMap:
name: helm-redis-scripts
defaultMode: 0755
- name: health
configMap:
name: helm-redis-health
defaultMode: 0755
- name: config
configMap:
name: helm-redis-configuration
- name: redis-tmp-conf
emptyDir: { }
- name: tmp
emptyDir: { }
- name: redis-data
emptyDir: { }
---
# Source: outside-deploy/charts/redis-db/templates/replicas/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: helm-redis-replicas
namespace: jxejpt
labels:
app.kubernetes.io/name: redis-db
octopus.control: redis-db-wdd
app.kubernetes.io/release: jxejpt
app.kubernetes.io/managed-by: octopus
app.kubernetes.io/component: replica
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: redis-db
app.kubernetes.io/release: jxejpt
app.kubernetes.io/component: replica
serviceName: helm-redis-headless
updateStrategy:
rollingUpdate: { }
type: RollingUpdate
template:
metadata:
labels:
app.kubernetes.io/name: redis-db
octopus.control: redis-db-wdd
app.kubernetes.io/release: jxejpt
app.kubernetes.io/managed-by: octopus
app.kubernetes.io/component: replica
annotations:
checksum/configmap: b64aa5db67e6e63811f3c1095b9fce34d83c86a471fccdda0e48eedb53a179b0
checksum/health: 6e0a6330e5ac63e565ae92af1444527d72d8897f91266f333555b3d323570623
checksum/scripts: b88df93710b7c42a76006e20218f05c6e500e6cc2affd4bb1985832f03166e98
checksum/secret: 43f1b0e20f9cb2de936bd182bc3683b720fc3cf4f4e76cb23c06a52398a50e8d
spec:
imagePullSecrets:
- name: harborsecret
securityContext:
fsGroup: 1001
serviceAccountName: helm-redis
terminationGracePeriodSeconds: 30
containers:
- name: redis
image: 10.20.1.135:8033/cmii/redis:6.2.6-debian-10-r0
imagePullPolicy: "Always"
securityContext:
runAsUser: 1001
command:
- /bin/bash
args:
- -c
- /opt/bitnami/scripts/start-scripts/start-replica.sh
env:
- name: BITNAMI_DEBUG
value: "false"
- name: REDIS_REPLICATION_MODE
value: slave
- name: REDIS_MASTER_HOST
value: helm-redis-master-0.helm-redis-headless.jxejpt.svc.cluster.local
- name: REDIS_MASTER_PORT_NUMBER
value: "6379"
- name: ALLOW_EMPTY_PASSWORD
value: "no"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: helm-redis
key: redis-password
- name: REDIS_MASTER_PASSWORD
valueFrom:
secretKeyRef:
name: helm-redis
key: redis-password
- name: REDIS_TLS_ENABLED
value: "no"
- name: REDIS_PORT
value: "6379"
ports:
- name: redis
containerPort: 6379
livenessProbe:
initialDelaySeconds: 20
periodSeconds: 5
timeoutSeconds: 6
successThreshold: 1
failureThreshold: 5
exec:
command:
- sh
- -c
- /health/ping_liveness_local_and_master.sh 5
readinessProbe:
initialDelaySeconds: 20
periodSeconds: 5
timeoutSeconds: 2
successThreshold: 1
failureThreshold: 5
exec:
command:
- sh
- -c
- /health/ping_readiness_local_and_master.sh 1
resources:
limits:
cpu: "2"
memory: 8Gi
requests:
cpu: "2"
memory: 8Gi
volumeMounts:
- name: start-scripts
mountPath: /opt/bitnami/scripts/start-scripts
- name: health
mountPath: /health
- name: redis-data
mountPath: /data
- name: config
mountPath: /opt/bitnami/redis/mounted-etc
- name: redis-tmp-conf
mountPath: /opt/bitnami/redis/etc
volumes:
- name: start-scripts
configMap:
name: helm-redis-scripts
defaultMode: 0755
- name: health
configMap:
name: helm-redis-health
defaultMode: 0755
- name: config
configMap:
name: helm-redis-configuration
- name: redis-tmp-conf
emptyDir: { }
- name: redis-data
emptyDir: { }

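With both StatefulSets running, replication can be verified end to end. A minimal sketch, assuming kubectl access to the jxejpt namespace; the password is read from the helm-redis Secret rather than hard-coded:

REDIS_PASSWORD=$(kubectl -n jxejpt get secret helm-redis -o jsonpath='{.data.redis-password}' | base64 -d)
kubectl -n jxejpt exec helm-redis-master-0 -- redis-cli -a "$REDIS_PASSWORD" ping                 # expect PONG
kubectl -n jxejpt exec helm-redis-replicas-0 -- redis-cli -a "$REDIS_PASSWORD" info replication   # expect role:slave, master_link_status:up

Note that FLUSHDB and FLUSHALL are renamed away in master.conf and replica.conf, so invoking them will report an unknown command.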

@@ -0,0 +1,496 @@
---
kind: ConfigMap
apiVersion: v1
metadata:
name: helm-live-srs-cm
namespace: jxejpt
labels:
cmii.app: live-srs
cmii.type: live
octopus.control: wdd
app.kubernetes.io/managed-by: octopus
helm.sh/chart: cmlc-live-srs-rtc-2.0.0
data:
srs.rtc.conf: |-
listen 30935;
max_connections 4096;
srs_log_tank console;
srs_log_level info;
srs_log_file /home/srs.log;
daemon off;
http_api {
enabled on;
listen 1985;
crossdomain on;
}
stats {
network 0;
}
http_server {
enabled on;
listen 8080;
dir /home/hls;
}
srt_server {
enabled on;
listen 30556;
maxbw 1000000000;
connect_timeout 4000;
peerlatency 600;
recvlatency 600;
}
rtc_server {
enabled on;
listen 30090;
candidate $CANDIDATE;
}
vhost __defaultVhost__ {
http_hooks {
enabled on;
on_publish http://helm-live-op-svc-v2:8080/hooks/on_push;
}
http_remux {
enabled on;
}
rtc {
enabled on;
rtmp_to_rtc on;
rtc_to_rtmp on;
keep_bframe off;
}
tcp_nodelay on;
min_latency on;
play {
gop_cache off;
mw_latency 100;
mw_msgs 10;
}
publish {
firstpkt_timeout 8000;
normal_timeout 4000;
mr on;
}
dvr {
enabled off;
dvr_path /home/dvr/[app]/[stream]/[2006][01]/[timestamp].mp4;
dvr_plan session;
}
hls {
enabled on;
hls_path /home/hls;
hls_fragment 10;
hls_window 60;
hls_m3u8_file [app]/[stream].m3u8;
hls_ts_file [app]/[stream]/[2006][01][02]/[timestamp]-[duration].ts;
hls_cleanup on;
hls_entry_prefix http://36.138.111.244:8088;
}
}
---
kind: Service
apiVersion: v1
metadata:
name: helm-live-srs-svc-exporter
namespace: jxejpt
labels:
octopus.control: wdd
app.kubernetes.io/managed-by: octopus
spec:
ports:
- name: rtmp
protocol: TCP
port: 30935
targetPort: 30935
nodePort: 31935
- name: rtc
protocol: UDP
port: 30090
targetPort: 30090
nodePort: 30090
- name: rtc-tcp
protocol: TCP
port: 30090
targetPort: 30090
nodePort: 30090
- name: srt
protocol: UDP
port: 30556
targetPort: 30556
nodePort: 30556
- name: api
protocol: TCP
port: 1985
targetPort: 1985
nodePort: 30080
selector:
srs-role: rtc
type: NodePort
sessionAffinity: None
externalTrafficPolicy: Cluster
---
kind: Service
apiVersion: v1
metadata:
name: helm-live-srs-svc
namespace: jxejpt
labels:
octopus.control: wdd
app.kubernetes.io/managed-by: octopus
spec:
ports:
- name: http
protocol: TCP
port: 8080
targetPort: 8080
- name: api
protocol: TCP
port: 1985
targetPort: 1985
selector:
srs-role: rtc
type: ClusterIP
sessionAffinity: None
---
kind: Service
apiVersion: v1
metadata:
name: helm-live-srsrtc-svc
namespace: jxejpt
labels:
octopus.control: wdd
app.kubernetes.io/managed-by: octopus
spec:
ports:
- name: rtmp
protocol: TCP
port: 30935
targetPort: 30935
selector:
srs-role: rtc
type: ClusterIP
sessionAffinity: None
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
name: helm-live-srs-rtc
namespace: jxejpt
labels:
octopus.control: wdd
app.kubernetes.io/managed-by: octopus
cmii.app: live-srs
cmii.type: live
helm.sh/chart: cmlc-live-srs-rtc-2.0.0
srs-role: rtc
spec:
replicas: 1
selector:
matchLabels:
srs-role: rtc
template:
metadata:
labels:
srs-role: rtc
spec:
volumes:
- name: srs-conf-file
configMap:
name: helm-live-srs-cm
items:
- key: srs.rtc.conf
path: docker.conf
defaultMode: 420
- name: srs-vol
emptyDir:
sizeLimit: 8Gi
containers:
- name: srs-rtc
image: 10.20.1.135:8033/cmii/srs:v5.0.195
ports:
- name: srs-rtmp
containerPort: 30935
protocol: TCP
- name: srs-api
containerPort: 1985
protocol: TCP
- name: srs-flv
containerPort: 8080
protocol: TCP
- name: srs-webrtc
containerPort: 30090
protocol: UDP
- name: srs-webrtc-tcp
containerPort: 30090
protocol: TCP
- name: srs-srt
containerPort: 30556
protocol: UDP
env:
- name: CANDIDATE
value: 36.138.111.244
resources:
limits:
cpu: 2000m
memory: 4Gi
requests:
cpu: 100m
memory: 256Mi
volumeMounts:
- name: srs-conf-file
mountPath: /usr/local/srs/conf/docker.conf
subPath: docker.conf
- name: srs-vol
mountPath: /home/dvr
subPath: jxejpt/helm-live/dvr
- name: srs-vol
mountPath: /home/hls
subPath: jxejpt/helm-live/hls
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: Always
- name: oss-adaptor
image: 10.20.1.135:8033/cmii/cmii-srs-oss-adaptor:2023-SA
env:
- name: OSS_ENDPOINT
value: 'http://10.20.1.139:9000'
- name: OSS_AK
value: cmii
- name: OSS_SK
value: 'B#923fC7mk'
- name: OSS_BUCKET
value: live-cluster-hls
- name: SRS_OP
value: 'http://helm-live-op-svc-v2:8080'
- name: MYSQL_ENDPOINT
value: 'helm-mysql:3306'
- name: MYSQL_USERNAME
value: k8s_admin
- name: MYSQL_PASSWORD
value: fP#UaH6qQ3)8
- name: MYSQL_DATABASE
value: cmii_live_srs_op
- name: MYSQL_TABLE
value: live_segment
- name: LOG_LEVEL
value: info
- name: OSS_META
value: 'yes'
resources:
limits:
cpu: 2000m
memory: 4Gi
requests:
cpu: 100m
memory: 256Mi
volumeMounts:
- name: srs-vol
mountPath: /cmii/share/hls
subPath: jxejpt/helm-live/hls
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: Always
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
securityContext: { }
imagePullSecrets:
- name: harborsecret
affinity: { }
schedulerName: default-scheduler
serviceName: helm-live-srsrtc-svc
podManagementPolicy: OrderedReady
updateStrategy:
type: RollingUpdate
rollingUpdate:
partition: 0
revisionHistoryLimit: 10
---
# live-srs section
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: helm-live-op-v2
namespace: jxejpt
labels:
octopus.control: wdd
app.kubernetes.io/managed-by: octopus
cmii.app: live-engine
cmii.type: live
helm.sh/chart: cmlc-live-live-op-2.0.0
live-role: op-v2
spec:
replicas: 1
selector:
matchLabels:
live-role: op-v2
template:
metadata:
labels:
live-role: op-v2
spec:
volumes:
- name: srs-conf-file
configMap:
name: helm-live-op-cm-v2
items:
- key: live.op.conf
path: bootstrap.yaml
defaultMode: 420
containers:
- name: helm-live-op-v2
image: 10.20.1.135:8033/cmii/cmii-live-operator:5.2.0
ports:
- name: operator
containerPort: 8080
protocol: TCP
resources:
limits:
cpu: 4800m
memory: 4Gi
requests:
cpu: 100m
memory: 256Mi
volumeMounts:
- name: srs-conf-file
mountPath: /cmii/bootstrap.yaml
subPath: bootstrap.yaml
livenessProbe:
httpGet:
path: /cmii/health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
periodSeconds: 20
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /cmii/health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
periodSeconds: 20
successThreshold: 1
failureThreshold: 3
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: Always
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
securityContext: { }
imagePullSecrets:
- name: harborsecret
affinity: { }
schedulerName: default-scheduler
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 25%
revisionHistoryLimit: 10
progressDeadlineSeconds: 600
---
kind: Service
apiVersion: v1
metadata:
name: helm-live-op-svc-v2
namespace: jxejpt
labels:
octopus.control: wdd
app.kubernetes.io/managed-by: octopus
spec:
ports:
- protocol: TCP
port: 8080
targetPort: 8080
nodePort: 30333
selector:
live-role: op-v2
type: NodePort
sessionAffinity: None
---
kind: Service
apiVersion: v1
metadata:
name: helm-live-op-svc
namespace: jxejpt
labels:
octopus.control: wdd
app.kubernetes.io/managed-by: octopus
spec:
ports:
- protocol: TCP
port: 8080
targetPort: 8080
selector:
live-role: op
type: ClusterIP
sessionAffinity: None
---
kind: ConfigMap
apiVersion: v1
metadata:
name: helm-live-op-cm-v2
namespace: jxejpt
labels:
octopus.control: wdd
app.kubernetes.io/managed-by: octopus
cmii.app: live-engine
cmii.type: live
data:
live.op.conf: |-
server:
port: 8080
spring:
main:
allow-bean-definition-overriding: true
allow-circular-references: true
application:
name: cmii-live-operator
platform:
info:
name: cmii-live-operator
description: cmii-live-operator
version: 6.0.0
scanPackage: com.cmii.live.op
cloud:
nacos:
config:
username: developer
password: N@cos14Good
server-addr: helm-nacos:8848
extension-configs:
- data-id: cmii-live-operator.yml
group: 6.0.0
refresh: true
shared-configs:
- data-id: cmii-backend-system.yml
group: 6.0.0
refresh: true
discovery:
enabled: false
live:
engine:
type: srs
endpoint: 'http://helm-live-srs-svc:1985'
proto:
rtmp: 'rtmp://36.138.111.244:31935'
rtsp: 'rtsp://36.138.111.244:30554'
srt: 'srt://36.138.111.244:30556'
flv: 'http://36.138.111.244:30500'
hls: 'http://36.138.111.244:30500'
rtc: 'webrtc://36.138.111.244:30090'
replay: 'https://36.138.111.244:30333'
minio:
endpoint: http://10.20.1.139:9000
access-key: cmii
secret-key: B#923fC7mk
bucket: live-cluster-hls

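With the SRS and operator pods running, publish and playback can be exercised against the NodePort endpoints advertised in the proto section above. An illustrative check, assuming ffmpeg and a local sample.mp4 on a machine that can reach 36.138.111.244:

ffmpeg -re -i sample.mp4 -c copy -f flv rtmp://36.138.111.244:31935/live/demo
# HTTP-FLV playback through the flv endpoint:
#   http://36.138.111.244:30500/live/demo.flv
# HLS playlist, served under the hls_entry_prefix configured in srs.rtc.conf:
#   http://36.138.111.244:8088/live/demo.m3u8

-c copy assumes the sample is already H.264/AAC; otherwise transcode with -c:v libx264 -c:a aac.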
File diff suppressed because it is too large.

Some files were not shown because too many files have changed in this diff.