DolphinScheduler 2.0.5: Modifying the Source, Rebuilding, and Repackaging the Image

I. Compiling the Source Code

GitHub: dolphinscheduler

Download the official source and read the /dolphinscheduler/docker/build/Dockerfile:
https://dolphinscheduler.apache.org/zh-cn/download/download.html
https://www.apache.org/dyn/closer.lua/dolphinscheduler/2.0.5/apache-dolphinscheduler-2.0.5-src.tar.gz

Maven is not installed for the local IDEA setup, so install it first.

Installing Maven

Download the latest apache-maven-xxx-bin.tar.gz from the official Maven site (http://maven.apache.org/download.cgi), extract it, and place it in a local directory.

Edit the environment variables:

vim ~/.bash_profile
# java env
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_333.jdk/Contents/Home
export MAVEN_HOME=/Users/kaiyi/opt/javaenv/apache-maven-3.8.6
export CLASS_PATH=.:$JAVA_HOME/lib
export PATH=.:$PATH:$JAVA_HOME/lib:$MAVEN_HOME/bin
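Note that every `source ~/.bash_profile` re-runs the `export PATH=…` line and appends the same directories again. A small guard function (a sketch, not part of the original profile) keeps PATH free of duplicates:

```shell
# Prepend a directory to PATH only if it is not already present,
# so repeated `source ~/.bash_profile` calls do not pile up duplicates.
path_prepend() {
  case ":$PATH:" in
    *":$1:"*) ;;               # already on PATH, nothing to do
    *) PATH="$1:$PATH" ;;
  esac
}

# JAVA_HOME / MAVEN_HOME come from the profile above; skip if unset.
if [ -n "$MAVEN_HOME" ]; then path_prepend "$MAVEN_HOME/bin"; fi
if [ -n "$JAVA_HOME" ]; then path_prepend "$JAVA_HOME/bin"; fi
export PATH
```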

Reload the environment:

source ~/.bash_profile

Then check the Maven version:

(base) ➜  test mvn -v
Apache Maven 3.8.6 (84538c9988a25aec085021c365c560670ad80f63)
Maven home: /Users/kaiyi/opt/javaenv/apache-maven-3.8.6
Java version: 1.8.0_333, vendor: Oracle Corporation, runtime: /Library/Java/JavaVirtualMachines/jdk1.8.0_333.jdk/Contents/Home/jre
Default locale: zh_CN, platform encoding: UTF-8
OS name: "mac os x", version: "12.4", arch: "x86_64", family: "mac"

How to Build

./mvnw clean install -Prelease

Building the code in IDEA:

The build fails with the following error:

...

[ERROR] COMPILATION ERROR : 
[INFO] -------------------------------------------------------------
[ERROR] No compiler is provided in this environment. Perhaps you are running on a JRE rather than a JDK?
[INFO] 1 error
[INFO] -------------------------------------------------------------
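The error means Maven is running on a JRE, which ships no `javac`. A quick check of what the current shell actually sees (a helper sketch, not from the original post):

```shell
# Report whether the Java on PATH is a full JDK (has javac) or only a JRE.
jdk_or_jre() {
  if command -v javac >/dev/null 2>&1; then
    echo "JDK"
  else
    echo "JRE"
  fi
}
```

In IDEA the fix is to point the Maven runner at a JDK home; on the command line the JAVA_HOME exported earlier already points at a full JDK.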

Since the build fails inside IDEA, compile directly in the system environment instead:

(base) ➜   mvn -v                                                             
Apache Maven 3.8.6 (84538c9988a25aec085021c365c560670ad80f63)
Maven home: /Users/kaiyi/opt/javaenv/apache-maven-3.8.6
Java version: 1.8.0_333, vendor: Oracle Corporation, runtime: /Library/Java/JavaVirtualMachines/jdk1.8.0_333.jdk/Contents/Home/jre
Default locale: zh_CN, platform encoding: UTF-8
OS name: "mac os x", version: "12.4", arch: "x86_64", family: "mac"

(base) ➜  cd /Users/kaiyi/Work/develop/Code/apache-dolphinscheduler-2.0.5-src
(base) ➜   ./mvnw clean install -Prelease
...
jar to /Users/kaiyi/.m2/repository/org/apache/dolphinscheduler/dolphinscheduler-dist/2.0.5/dolphinscheduler-dist-2.0.5-sources.jar
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for dolphinscheduler 2.0.5:
[INFO] 
[INFO] dolphinscheduler ................................... SUCCESS [  2.564 s]
[INFO] dolphinscheduler-spi ............................... SUCCESS [  7.117 s]
[INFO] dolphinscheduler-alert ............................. SUCCESS [  0.233 s]
[INFO] dolphinscheduler-alert-api ......................... SUCCESS [  1.177 s]
[INFO] dolphinscheduler-alert-plugins ..................... SUCCESS [  0.154 s]
[INFO] dolphinscheduler-alert-email ....................... SUCCESS [  4.218 s]
[INFO] dolphinscheduler-alert-wechat ...................... SUCCESS [  1.944 s]
[INFO] dolphinscheduler-alert-dingtalk .................... SUCCESS [  1.741 s]
[INFO] dolphinscheduler-alert-script ...................... SUCCESS [  1.891 s]
[INFO] dolphinscheduler-alert-http ........................ SUCCESS [  1.716 s]
[INFO] dolphinscheduler-alert-feishu ...................... SUCCESS [  1.746 s]
[INFO] dolphinscheduler-alert-slack ....................... SUCCESS [  1.758 s]
[INFO] dolphinscheduler-common ............................ SUCCESS [01:20 min]
[INFO] dolphinscheduler-remote ............................ SUCCESS [  8.988 s]
[INFO] dolphinscheduler-dao ............................... SUCCESS [  6.127 s]
[INFO] dolphinscheduler-alert-server ...................... SUCCESS [  2.643 s]
[INFO] dolphinscheduler-registry .......................... SUCCESS [  0.050 s]
[INFO] dolphinscheduler-registry-api ...................... SUCCESS [  0.978 s]
[INFO] dolphinscheduler-registry-plugins .................. SUCCESS [  0.064 s]
[INFO] dolphinscheduler-registry-zookeeper ................ SUCCESS [  6.358 s]
[INFO] dolphinscheduler-task-plugin ....................... SUCCESS [  0.062 s]
[INFO] dolphinscheduler-task-api .......................... SUCCESS [  2.272 s]
[INFO] dolphinscheduler-task-shell ........................ SUCCESS [  0.974 s]
[INFO] dolphinscheduler-datasource-plugin ................. SUCCESS [  0.053 s]
[INFO] dolphinscheduler-datasource-api .................... SUCCESS [ 15.086 s]
[INFO] dolphinscheduler-datasource-clickhouse ............. SUCCESS [  2.555 s]
[INFO] dolphinscheduler-datasource-db2 .................... SUCCESS [  2.323 s]
[INFO] dolphinscheduler-datasource-hive ................... SUCCESS [  3.706 s]
[INFO] dolphinscheduler-datasource-mysql .................. SUCCESS [  2.396 s]
[INFO] dolphinscheduler-datasource-oracle ................. SUCCESS [  2.351 s]
[INFO] dolphinscheduler-datasource-postgresql ............. SUCCESS [  2.397 s]
[INFO] dolphinscheduler-datasource-sqlserver .............. SUCCESS [  2.385 s]
[INFO] dolphinscheduler-datasource-all .................... SUCCESS [  0.541 s]
[INFO] dolphinscheduler-task-datax ........................ SUCCESS [  3.804 s]
[INFO] dolphinscheduler-task-flink ........................ SUCCESS [  1.013 s]
[INFO] dolphinscheduler-task-http ......................... SUCCESS [  1.059 s]
[INFO] dolphinscheduler-task-mr ........................... SUCCESS [  1.000 s]
[INFO] dolphinscheduler-task-python ....................... SUCCESS [  0.997 s]
[INFO] dolphinscheduler-task-spark ........................ SUCCESS [  1.055 s]
[INFO] dolphinscheduler-task-sql .......................... SUCCESS [  1.538 s]
[INFO] dolphinscheduler-task-sqoop ........................ SUCCESS [  1.381 s]
[INFO] dolphinscheduler-task-procedure .................... SUCCESS [  1.376 s]
[INFO] dolphinscheduler-task-pigeon ....................... SUCCESS [ 10.254 s]
[INFO] dolphinscheduler-ui ................................ SUCCESS [03:17 min]
[INFO] dolphinscheduler-service ........................... SUCCESS [ 11.564 s]
[INFO] dolphinscheduler-server ............................ SUCCESS [ 19.756 s]
[INFO] dolphinscheduler-api ............................... SUCCESS [ 11.222 s]
[INFO] dolphinscheduler-python ............................ SUCCESS [ 55.038 s]
[INFO] dolphinscheduler-standalone-server ................. SUCCESS [  2.087 s]
[INFO] dolphinscheduler-dist .............................. SUCCESS [03:42 min]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  11:53 min
[INFO] Finished at: 2022-10-15T18:49:58+08:00
[INFO] ------------------------------------------------------------------------

The build has succeeded. Now inspect the build output directory:

(base) ➜  target pwd
/Users/kaiyi/Work/develop/Code/apache-dolphinscheduler-2.0.5-src/dolphinscheduler-dist/target
(base) ➜  target ls -l
total 299632
-rw-r--r--  1 kaiyi  staff  150227580 Oct 15 18:49 apache-dolphinscheduler-2.0.5-bin.tar.gz
-rw-r--r--  1 kaiyi  staff      13044 Oct 15 18:49 apache-dolphinscheduler-2.0.5-sources.jar
-rw-r--r--  1 kaiyi  staff    2119086 Oct 15 18:49 apache-dolphinscheduler-2.0.5-src.tar.gz
-rw-r--r--  1 kaiyi  staff      13980 Oct 15 18:46 apache-dolphinscheduler-2.0.5.jar
drwxr-xr-x  2 kaiyi  staff         64 Oct 15 18:46 archive-tmp
-rw-r--r--  1 kaiyi  staff         87 Oct 15 18:46 checkstyle-cachefile
-rw-r--r--  1 kaiyi  staff      11598 Oct 15 18:46 checkstyle-checker.xml
-rw-r--r--  1 kaiyi  staff         81 Oct 15 18:46 checkstyle-result.xml
drwxr-xr-x  3 kaiyi  staff         96 Oct 15 18:46 classes
drwxr-xr-x  6 kaiyi  staff        192 Oct 15 18:46 dolphinscheduler-dist-2.0.5
-rw-r--r--  1 kaiyi  staff       6914 Oct 15 18:46 dolphinscheduler-dist-2.0.5.tar.gz
drwxr-xr-x  3 kaiyi  staff         96 Oct 15 18:46 generated-classes
drwxr-xr-x  7 kaiyi  staff        224 Oct 15 18:46 incremental
drwxr-xr-x  3 kaiyi  staff         96 Oct 15 18:49 javadoc-bundle-options
drwxr-xr-x  3 kaiyi  staff         96 Oct 15 18:46 maven-shared-archive-resources
drwxr-xr-x  4 kaiyi  staff        128 Oct 15 18:46 python
drwxr-xr-x  3 kaiyi  staff         96 Oct 15 18:46 test-classes

Artifacts:

dolphinscheduler-dist/target/apache-dolphinscheduler-${latest.release.version}-bin.tar.gz: Binary package of DolphinScheduler
dolphinscheduler-dist/target/apache-dolphinscheduler-${latest.release.version}-src.tar.gz: Source code package of DolphinScheduler

Deployment Test

Deploy the freshly built package to verify that it works.

Start the DolphinScheduler Standalone Server

Extract the package and start DolphinScheduler.
The binary package ships with a standalone startup script and can be started right after extraction. Switch to a user with sudo privileges and run the script:

# Extract and run the Standalone Server
tar -xvzf apache-dolphinscheduler-*-bin.tar.gz
cd apache-dolphinscheduler-*-bin
sh ./bin/dolphinscheduler-daemon.sh start standalone-server
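The standalone server takes a little while to come up. A small polling helper (a bash sketch using the `/dev/tcp` pseudo-device, not part of the official scripts) can wait for the UI port before opening the browser:

```shell
# Poll until a local TCP port accepts connections, or give up after $2 tries.
wait_for_port() {
  port=$1
  tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    # bash opens a TCP connection when redirecting to /dev/tcp/<host>/<port>
    if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
      return 0
    fi
    i=$((i+1))
    sleep 1
  done
  return 1
}

# wait_for_port 12345 60 && echo "DolphinScheduler UI is up"
```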

Log in to DolphinScheduler
Open http://localhost:12345/dolphinscheduler in a browser to reach the system UI. The default username and password are:

admin/dolphinscheduler123

As you can see, the rebuilt package runs.

Troubleshooting

If the following error appears during the build:

Error: Could not find or load main class org.apache.maven.wrapper.MavenWrapperMain

Searching online shows the cause: the project is missing the .mvn directory and its wrapper jar; in short, the wrapper jar was never installed, so the wrapper main class cannot be launched and has to be installed manually.

Run the following command:

mvn -N io.takari:maven:wrapper
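After the command finishes, an mvnw script and a .mvn/wrapper directory appear in the project root. The wrapper pins the Maven version in .mvn/wrapper/maven-wrapper.properties, roughly like this (the versions below are examples, not taken from the original project):

```properties
distributionUrl=https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.8.6/apache-maven-3.8.6-bin.zip
wrapperUrl=https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar
```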

II. Building the Base DS Image

1. Copy the built package into the Docker build directory

Copy the binary package built above into the /apache-dolphinscheduler-2.0.5-src/docker/build/ directory:

(base) ➜  cd /Users/kaiyi/Work/develop/Code/apache-dolphinscheduler-2.0.5-src/dolphinscheduler-dist/target
(base) ➜   ls -l
total 299632
drwxr-xr-x  15 kaiyi  staff        480 Oct 15 20:07 apache-dolphinscheduler-2.0.5-bin
-rw-r--r--   1 kaiyi  staff  150227580 Oct 15 18:49 apache-dolphinscheduler-2.0.5-bin.tar.gz
-rw-r--r--   1 kaiyi  staff      13044 Oct 15 18:49 apache-dolphinscheduler-2.0.5-sources.jar
-rw-r--r--   1 kaiyi  staff    2119086 Oct 15 18:49 apache-dolphinscheduler-2.0.5-src.tar.gz
-rw-r--r--   1 kaiyi  staff      13980 Oct 15 18:46 apache-dolphinscheduler-2.0.5.jar
drwxr-xr-x   2 kaiyi  staff         64 Oct 15 18:46 archive-tmp
-rw-r--r--   1 kaiyi  staff         87 Oct 15 18:46 checkstyle-cachefile
-rw-r--r--   1 kaiyi  staff      11598 Oct 15 18:46 checkstyle-checker.xml
-rw-r--r--   1 kaiyi  staff         81 Oct 15 18:46 checkstyle-result.xml
drwxr-xr-x   3 kaiyi  staff         96 Oct 15 18:46 classes
drwxr-xr-x   6 kaiyi  staff        192 Oct 15 18:46 dolphinscheduler-dist-2.0.5
-rw-r--r--   1 kaiyi  staff       6914 Oct 15 18:46 dolphinscheduler-dist-2.0.5.tar.gz
drwxr-xr-x   3 kaiyi  staff         96 Oct 15 18:46 generated-classes
drwxr-xr-x   7 kaiyi  staff        224 Oct 15 18:46 incremental
drwxr-xr-x   3 kaiyi  staff         96 Oct 15 18:49 javadoc-bundle-options
drwxr-xr-x   3 kaiyi  staff         96 Oct 15 18:46 maven-shared-archive-resources
drwxr-xr-x   4 kaiyi  staff        128 Oct 15 18:46 python
drwxr-xr-x   3 kaiyi  staff         96 Oct 15 18:46 test-classes

# copy the package into the Docker build context
cp apache-dolphinscheduler-2.0.5-bin.tar.gz  /Users/kaiyi/Work/develop/Code/apache-dolphinscheduler-2.0.5-src/docker/build/

2. Building the image

The Dockerfile:

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

FROM openjdk:8-jre-slim-buster

ARG VERSION
ARG DEBIAN_FRONTEND=noninteractive

ENV TZ Asia/Shanghai
ENV LANG C.UTF-8
ENV DOCKER true
ENV DOLPHINSCHEDULER_HOME /opt/dolphinscheduler

# 1. install command/library/software
# If installation is slow, you can replace debian's mirror with a faster mirror, for example:
# RUN { \
#     echo "deb http://mirrors.tuna.tsinghua.edu.cn/debian/ buster main contrib non-free"; \
#     echo "deb http://mirrors.tuna.tsinghua.edu.cn/debian/ buster-updates main contrib non-free"; \
#     echo "deb http://mirrors.tuna.tsinghua.edu.cn/debian/ buster-backports main contrib non-free"; \
#     echo "deb http://mirrors.tuna.tsinghua.edu.cn/debian-security buster/updates main contrib non-free"; \
# } > /etc/apt/sources.list
RUN apt-get update && \
    apt-get install -y --no-install-recommends tzdata dos2unix python supervisor procps psmisc netcat sudo tini && \
    echo "Asia/Shanghai" > /etc/timezone && \
    rm -f /etc/localtime && \
    dpkg-reconfigure tzdata && \
    rm -rf /var/lib/apt/lists/* /tmp/*

# 2. add dolphinscheduler
ADD ./apache-dolphinscheduler-${VERSION}-bin.tar.gz /opt/
RUN ln -s -r /opt/apache-dolphinscheduler-${VERSION}-bin /opt/dolphinscheduler
WORKDIR /opt/apache-dolphinscheduler-${VERSION}-bin

# 3. add configuration and modify permissions and set soft links
COPY ./checkpoint.sh /root/checkpoint.sh
COPY ./startup-init-conf.sh /root/startup-init-conf.sh
COPY ./startup.sh /root/startup.sh
COPY ./conf/dolphinscheduler/*.tpl /opt/dolphinscheduler/conf/
COPY ./conf/dolphinscheduler/logback/* /opt/dolphinscheduler/conf/
COPY ./conf/dolphinscheduler/supervisor/supervisor.ini /etc/supervisor/conf.d/
COPY ./conf/dolphinscheduler/env/dolphinscheduler_env.sh.tpl /opt/dolphinscheduler/conf/env/
RUN sed -i 's/*.conf$/*.ini/' /etc/supervisor/supervisord.conf && \
    dos2unix /root/checkpoint.sh && \
    dos2unix /root/startup-init-conf.sh && \
    dos2unix /root/startup.sh && \
    dos2unix /opt/dolphinscheduler/script/*.sh && \
    dos2unix /opt/dolphinscheduler/bin/*.sh && \
    rm -f /bin/sh && \
    ln -s /bin/bash /bin/sh && \
    mkdir -p /tmp/xls && \
    echo PS1=\'\\w \\$ \' >> ~/.bashrc && \
    echo "Set disable_coredump false" >> /etc/sudo.conf

# 4. expose port
EXPOSE 5678 1234 12345 50051 50052

ENTRYPOINT ["/usr/bin/tini", "--", "/root/startup.sh"]

Build the new image (VERSION must be passed as a build arg so the Dockerfile can locate the binary tarball):

docker build -t apache/dolphinscheduler:2.0.5 --build-arg VERSION=2.0.5 .
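The ADD step resolves apache-dolphinscheduler-${VERSION}-bin.tar.gz inside the build context, so a mismatched or missing VERSION fails midway through the build. A tiny pre-flight check (a hypothetical helper, not in the repo) catches this up front:

```shell
# Verify the binary tarball for a given version exists in the current
# directory (the Docker build context) before invoking docker build.
require_dist() {
  f="apache-dolphinscheduler-$1-bin.tar.gz"
  if [ -f "$f" ]; then
    echo "found $f"
  else
    echo "missing $f" >&2
    return 1
  fi
}

# require_dist 2.0.5 && docker build -t apache/dolphinscheduler:2.0.5 --build-arg VERSION=2.0.5 .
```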

Actual build output:

# To distinguish it from the original image, the tag used here is 2.0.51
(base) ➜  build docker build -t apache/dolphinscheduler:2.0.51 --build-arg VERSION=2.0.5  . 
[+] Building 275.3s (18/18) FINISHED                                                                                                                  
 => [internal] load build definition from Dockerfile                                                                                             0.0s
 => => transferring dockerfile: 3.24kB                                                                                                           0.0s
 => [internal] load .dockerignore                                                                                                                0.0s
 => => transferring context: 2B                                                                                                                  0.0s
 => [internal] load metadata for docker.io/library/openjdk:8-jre-slim-buster                                                                     2.7s
 => [internal] load build context                                                                                                                1.5s
 => => transferring context: 150.31MB                                                                                                            1.5s
 => [ 1/13] FROM docker.io/library/openjdk:8-jre-slim-buster@sha256:12b19ef1470fa5f143830c10b747f1b824b646ebd423fb5348850f594ead6dcf           181.0s
 => => resolve docker.io/library/openjdk:8-jre-slim-buster@sha256:12b19ef1470fa5f143830c10b747f1b824b646ebd423fb5348850f594ead6dcf               0.0s
 => => sha256:12b19ef1470fa5f143830c10b747f1b824b646ebd423fb5348850f594ead6dcf 549B / 549B                                                       0.0s
 => => sha256:70305e31cd7f064e66f7a2fd154ec69f9bd9da49605fe029ffe785db1f691a74 1.16kB / 1.16kB                                                   0.0s
 => => sha256:05672d76967865028d40ea5a07a9ca58df6cb75bc6c7602d285c7bdd6058f14e 7.47kB / 7.47kB                                                   0.0s
 => => sha256:751ef25978b2971e15496369695ba51ed5b1b9aaca7e37b18a173d754d1ca820 27.14MB / 27.14MB                                               178.2s
 => => sha256:140e22108c7d39a72fc1f5f3ba4ffdd55836614e9c53175f5d43ada8b6bbaacc 3.27MB / 3.27MB                                                   1.1s
 => => sha256:64ab07c5523eec5b4894fe8c386d09509a48d5df9197cd11d82c8d688831b64a 211B / 211B                                                       0.3s
 => => sha256:19cc75812df4168018b793d2ff1dfb29cbf88f4a36d78333b9661757fbfe47a0 41.71MB / 41.71MB                                                16.5s
 => => extracting sha256:751ef25978b2971e15496369695ba51ed5b1b9aaca7e37b18a173d754d1ca820                                                        1.1s
 => => extracting sha256:140e22108c7d39a72fc1f5f3ba4ffdd55836614e9c53175f5d43ada8b6bbaacc                                                        0.2s
 => => extracting sha256:64ab07c5523eec5b4894fe8c386d09509a48d5df9197cd11d82c8d688831b64a                                                        0.0s
 => => extracting sha256:19cc75812df4168018b793d2ff1dfb29cbf88f4a36d78333b9661757fbfe47a0                                                        1.2s
 => [ 2/13] RUN apt-get update &&     apt-get install -y --no-install-recommends tzdata dos2unix python supervisor procps psmisc netcat sudo t  88.2s
 => [ 3/13] ADD ./apache-dolphinscheduler-2.0.5-bin.tar.gz /opt/                                                                                 1.9s 
 => [ 4/13] RUN ln -s -r /opt/apache-dolphinscheduler-2.0.5-bin /opt/dolphinscheduler                                                            0.3s 
 => [ 5/13] WORKDIR /opt/apache-dolphinscheduler-2.0.5-bin                                                                                       0.0s 
 => [ 6/13] COPY ./checkpoint.sh /root/checkpoint.sh                                                                                             0.0s 
 => [ 7/13] COPY ./startup-init-conf.sh /root/startup-init-conf.sh                                                                               0.0s 
 => [ 8/13] COPY ./startup.sh /root/startup.sh                                                                                                   0.0s 
 => [ 9/13] COPY ./conf/dolphinscheduler/*.tpl /opt/dolphinscheduler/conf/                                                                       0.0s
 => [10/13] COPY ./conf/dolphinscheduler/logback/* /opt/dolphinscheduler/conf/                                                                   0.0s
 => [11/13] COPY ./conf/dolphinscheduler/supervisor/supervisor.ini /etc/supervisor/conf.d/                                                       0.0s
 => [12/13] COPY ./conf/dolphinscheduler/env/dolphinscheduler_env.sh.tpl /opt/dolphinscheduler/conf/env/                                         0.0s
 => [13/13] RUN sed -i 's/*.conf$/*.ini/' /etc/supervisor/supervisord.conf &&     dos2unix /root/checkpoint.sh &&     dos2unix /root/startup-in  0.2s
 => exporting to image                                                                                                                           0.8s
 => => exporting layers                                                                                                                          0.8s
 => => writing image sha256:c83b4896f1023381f934ebf7a3abb4e7a7dc99490e81c8eb11995639c1415308                                                     0.0s
 => => naming to docker.io/apache/dolphinscheduler:2.0.51                                                                                        0.0s

Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them

III. Building a MySQL/Python Environment Image (Hands-On)

The base DS image above follows the official docs, but our DS deployment runs on Kubernetes and needs MySQL, Python, and related environments, so these have to be baked into a new image built on top of it.

1. Build the base image

cd /Users/kaiyi/Work/develop/Code/dolphinscheduler-dev/dolphinscheduler-dist/target
cp apache-dolphinscheduler-2.0.5-bin.tar.gz  /Users/kaiyi/Work/develop/Code/dolphinscheduler-dev/docker/build/
cd  /Users/kaiyi/Work/develop/Code/dolphinscheduler-dev/docker/build/

# build the image
# the -ms suffix stands for multi-statement (execute multiple SQL statements)
docker build -t apache/dolphinscheduler:2.0.5-ms --build-arg VERSION=2.0.5  .

# tag the image and push it to the Alibaba Cloud registry (the official registry is too slow, so a domestic registry is used)
docker tag apache/dolphinscheduler:2.0.5-ms   registry.cn-hangzhou.aliyuncs.com/dev-mmp/dolphinscheduler:2.0.5-ms
docker login --username=407xxxx7@qq.com registry.cn-hangzhou.aliyuncs.com
docker push registry.cn-hangzhou.aliyuncs.com/dev-mmp/dolphinscheduler:2.0.5-ms

2. Build the deployment-environment image

1. Download the MySQL driver mysql-connector-java-8.0.16.jar

cd /root/develop/softwares/ds
wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.16/mysql-connector-java-8.0.16.jar

2. Create a new Dockerfile that adds the MySQL driver jar.
The same Dockerfile also adds a Python environment via Miniconda:

# FROM dolphinscheduler.docker.scarf.sh/apache/dolphinscheduler:2.0.5
# use the newly built image instead
FROM registry.cn-hangzhou.aliyuncs.com/dev-mmp/dolphinscheduler:2.0.5-ms
COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib

# System packages
# Note: the base image is Debian buster, so this Ubuntu-mirror substitution
# matches nothing in /etc/apt/sources.list and is effectively a no-op
# (the build log further down still fetches from deb.debian.org).
RUN sed -i s@/archive.ubuntu.com/@/mirrors.aliyun.com/@g /etc/apt/sources.list
RUN apt-get clean
RUN apt-get update && \
    apt-get install -y curl && \
    apt-get install -y expect && \
    apt-get install -y tar && \
    apt-get install -y vim && \
    apt-get install -y telnet && \
    apt-get install -y net-tools && \
    apt-get install -y iputils-ping
# RUN apt-get update && apt-get install -yq curl wget jq vim

# python env
ARG CONDA_VER=4.12.0
ARG OS_TYPE=x86_64
ARG PY_VER=3.8
ARG PY_VER_CONDA=py38
ARG PANDAS_VER=1.3

# Use the above args
# ARG CONDA_VER
# ARG OS_TYPE
# ARG PY_VER_CONDA

# Install miniconda to /miniconda
# https://repo.anaconda.com/miniconda/Miniconda3-py38_4.12.0-Linux-x86_64.sh
RUN curl -LO "http://repo.continuum.io/miniconda/Miniconda3-${PY_VER_CONDA}_${CONDA_VER}-Linux-${OS_TYPE}.sh"
RUN bash Miniconda3-${PY_VER_CONDA}_${CONDA_VER}-Linux-${OS_TYPE}.sh -p /miniconda -b
RUN rm Miniconda3-${PY_VER_CONDA}_${CONDA_VER}-Linux-${OS_TYPE}.sh
ENV PATH=/miniconda/bin:${PATH}
RUN conda update -y conda
RUN conda init

# ARG PY_VER
# ARG PANDAS_VER
# Install packages from conda
RUN conda install -c anaconda -y python=${PY_VER}
RUN conda install -c anaconda -y \
    pandas=${PANDAS_VER}
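The Miniconda installer filename in the RUN lines above is assembled from the build args. Spelled out as a function (an illustration of the Dockerfile's string interpolation, not code that runs in the image):

```shell
# Mirror of the Dockerfile's interpolation:
#   Miniconda3-${PY_VER_CONDA}_${CONDA_VER}-Linux-${OS_TYPE}.sh
miniconda_url() {
  conda_ver=$1
  os_type=$2
  py_ver_conda=$3
  echo "http://repo.continuum.io/miniconda/Miniconda3-${py_ver_conda}_${conda_ver}-Linux-${os_type}.sh"
}
```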

3. Build a new image containing the MySQL driver and the Python environment:

docker build -t apache/dolphinscheduler:mysql-py-driver .

Build output:

[root@k8s-master ds]# docker build -t apache/dolphinscheduler:mysql-py-driver .
Sending build context to Docker daemon  2.297MB
Step 1/18 : FROM registry.cn-hangzhou.aliyuncs.com/dev-mmp/dolphinscheduler:2.0.5-ms
 ---> 6fa5406b8921
Step 2/18 : COPY mysql-connector-java-8.0.16.jar /opt/dolphinscheduler/lib
 ---> 17ce04d1b913
Step 3/18 : RUN sed -i s@/archive.ubuntu.com/@/mirrors.aliyun.com/@g /etc/apt/sources.list
 ---> Running in da1ce09044eb
Removing intermediate container da1ce09044eb
 ---> 1ab135da1592
Step 4/18 : RUN apt-get clean
 ---> Running in a0c19076b17e
Removing intermediate container a0c19076b17e
 ---> 720e47b48114
Step 5/18 : RUN apt-get update &&     apt-get install -y curl &&     apt-get install -y expect &&     apt-get install -y tar &&         apt-get install -y vim &&         apt-get install -y telnet &&     apt-get install -y net-tools &&         apt-get install -y iputils-ping
 ---> Running in f13595fa84b0
Get:1 http://deb.debian.org/debian buster InRelease [122 kB]

...

Preparing transaction: ...working... done
Verifying transaction: ...working... done
Executing transaction: ...working... done
Retrieving notices: ...working... done
Removing intermediate container e03ffb26a25c
 ---> 5b900ae754ec
Successfully built 5b900ae754ec
Successfully tagged apache/dolphinscheduler:mysql-py-driver

Inspect the newly built image:

[root@k8s-master ds]# docker images | grep 'dolphin'
apache/dolphinscheduler                                      mysql-py-driver   5b900ae754ec   4 minutes ago   2.54GB
registry.cn-hangzhou.aliyuncs.com/dev-mmp/dolphinscheduler   2.0.5-ms          6fa5406b8921   36 hours ago    405MB
[root@k8s-master ds]# 

Push the built image to the Alibaba Cloud registry:

# tag the image and push it to the Alibaba Cloud registry
docker tag apache/dolphinscheduler:mysql-py-driver   registry.cn-hangzhou.aliyuncs.com/dev-mmp/dolphinscheduler:2.0.5-ms-mysql-py-driver
docker login --username=407xxxx7@qq.com registry.cn-hangzhou.aliyuncs.com
docker push registry.cn-hangzhou.aliyuncs.com/dev-mmp/dolphinscheduler:2.0.5-ms-mysql-py-driver

IV. Deploying to Kubernetes

1. Download the package

wget --no-check-certificate https://dlcdn.apache.org/dolphinscheduler/2.0.5/apache-dolphinscheduler-2.0.5-src.tar.gz

$ tar -zxvf apache-dolphinscheduler-2.0.5-src.tar.gz
$ cd apache-dolphinscheduler-2.0.5-src/docker/kubernetes/dolphinscheduler

Edit the externalDatabase settings in values.yaml (in particular host, username, and password).

Deploy:

$ helm install dolphinscheduler . -n mms
# uninstall
# $ helm uninstall dolphinscheduler -n mms

Deployment output:

$ helm install dolphinscheduler . -n mms
** Please be patient while the chart DolphinScheduler 2.0.5 is being deployed **

Access DolphinScheduler UI URL by:

  kubectl port-forward -n mms svc/dolphinscheduler-api 12345:12345

  DolphinScheduler UI URL: http://127.0.0.1:12345/dolphinscheduler

Check the pods:

[root@k8s-master ds]# kubectl get pods -n mms
NAME                                     READY   STATUS    RESTARTS   AGE
dolphinscheduler-alert-d547bc58f-z5dj8   1/1     Running   0          2m23s
dolphinscheduler-api-548b4b4c59-w87nj    1/1     Running   0          2m23s
dolphinscheduler-master-0                1/1     Running   0          2m23s
dolphinscheduler-master-1                1/1     Running   0          2m23s
dolphinscheduler-master-2                1/1     Running   0          2m22s
dolphinscheduler-worker-0                1/1     Running   0          2m23s
dolphinscheduler-worker-1                1/1     Running   0          2m23s
dolphinscheduler-worker-2                1/1     Running   0          2m22s
mysql-mms-69ff94c459-lgxzb               1/1     Running   0          4h20m
zk-0                                     1/1     Running   0          87m
zk-1                                     1/1     Running   0          87m
zk-2                                     1/1     Running   0          87m
[root@k8s-master ds]# 

Expose the service externally:

kubectl port-forward --address 192.168.1.189 -n mms svc/dolphinscheduler-api 12345:12345 # uses the mms namespace

Alternatively, change the dolphinscheduler-api Service type to NodePort and expose port 31234 (the range of valid NodePorts is 30000-32767).

The UI is then reachable directly at http://192.168.1.189:31234/dolphinscheduler/
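One way to make that Service change without editing the chart is a strategic merge patch (a sketch; it assumes the chart's dolphinscheduler-api Service exposes port 12345 under spec.ports):

```yaml
# nodeport-patch.yaml -- apply with (kubectl >= 1.21):
#   kubectl patch svc dolphinscheduler-api -n mms --patch-file nodeport-patch.yaml
spec:
  type: NodePort
  ports:
    - port: 12345        # merged with the existing entry by its "port" key
      nodePort: 31234
```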

V. Exporting the Image

# export the image
docker save > dolphinscheduler-2.0.5-ms-mysql-py.tar registry.cn-hangzhou.aliyuncs.com/dev-mmp/dolphinscheduler:2.0.5-ms-mysql-py-driver

# load the image
# the counterpart of save is load, using -i to import
[root@dce88 ~]# docker load -i /opt/images.tar.gz
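For scripting exports, a tar filename can be derived from the image reference with a small helper (hypothetical, not a docker feature; the hand-picked name used above differs slightly):

```shell
# Turn an image reference like "registry.example.com/ns/name:tag"
# into a file name "name-tag.tar" for docker save.
# Assumes the reference carries an explicit tag.
image_to_filename() {
  base=${1##*/}        # drop registry and namespace
  name=${base%%:*}     # part before the colon
  tag=${base##*:}      # part after the colon
  echo "${name}-${tag}.tar"
}

# img=registry.cn-hangzhou.aliyuncs.com/dev-mmp/dolphinscheduler:2.0.5-ms
# docker save "$img" > "$(image_to_filename "$img")"
```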

Check the exported file:

[root@k8s-master imgtar]# pwd
/k8s/softwares/imgtar
[root@k8s-master imgtar]# docker save > dolphinscheduler-2.0.5-ms-mysql-py.tar registry.cn-hangzhou.aliyuncs.com/dev-mmp/dolphinscheduler:2.0.5-ms-mysql-py-driver
[root@k8s-master imgtar]# ls -l
total 2523372
-rw-r--r--. 1 root root 2583932928 Oct 28 09:40 dolphinscheduler-2.0.5-ms-mysql-py.tar
[root@k8s-master imgtar]# ls -lh
total 2.5G
-rw-r--r--. 1 root root 2.5G Oct 28 09:40 dolphinscheduler-2.0.5-ms-mysql-py.tar
[root@k8s-master imgtar]#

VI. Appendix

The values.yaml file:

# Default values for dolphinscheduler-chart.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

timezone: "Asia/Shanghai"

image:
  repository: "registry.cn-hangzhou.aliyuncs.com/dev-mmp/dolphinscheduler"
  tag: "2.0.5-ms-mysql-py-driver"
  pullPolicy: "IfNotPresent"
  pullSecret: ""

## If no external database is configured, DolphinScheduler's bundled database will be used by default.
postgresql:
  enabled: false
  postgresqlUsername: "root"
  postgresqlPassword: "root"
  postgresqlDatabase: "dolphinscheduler"
  persistence:
    enabled: false
    size: "20Gi"
    storageClass: "-"

## If an external database exists, set postgresql.enabled to false;
## the external database will then be used instead of the bundled one.
externalDatabase:
  type: "mysql"
  driver: "com.mysql.jdbc.Driver"
  host: "mysql-mms.mms"
  port: "3306"
  username: "root"
  password: "ZEFeJtbhkE"
  database: "dolphinscheduler"
  params: "serverTimezone=Asia/Shanghai&characterEncoding=UTF-8&useSSL=false"

## If no external zookeeper is configured, DolphinScheduler's bundled zookeeper will be used by default.
zookeeper:
  enabled: false
  tickTime: 3000
  maxSessionTimeout: 60000
  initLimit: 300
  maxClientCnxns: 2000
  fourlwCommandsWhitelist: "srvr,ruok,wchs,cons"
  persistence:
    enabled: false
    size: "20Gi"
    storageClass: "-"
  zookeeperRoot: "/dolphinscheduler"

## If an external zookeeper exists, set zookeeper.enabled to false;
## the external zookeeper will then be used instead of the bundled one.
externalZookeeper:
  zookeeperQuorum: "zk-cs:2181"
  zookeeperRoot: "/dolphinscheduler"
externalRegistry:
  registryPluginName: "zookeeper"
  registryServers: "zk-cs:2181"
common:
  ## Configmap
  configmap:
    DOLPHINSCHEDULER_OPTS: ""
    DATA_BASEDIR_PATH: "/tmp/dolphinscheduler"
    RESOURCE_STORAGE_TYPE: "HDFS"
    RESOURCE_UPLOAD_PATH: "/dolphinscheduler"
    FS_DEFAULT_FS: "file:///"
    FS_S3A_ENDPOINT: "s3.xxx.amazonaws.com"
    FS_S3A_ACCESS_KEY: "xxxxxxx"
    FS_S3A_SECRET_KEY: "xxxxxxx"
    HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE: "false"
    JAVA_SECURITY_KRB5_CONF_PATH: "/opt/krb5.conf"
    LOGIN_USER_KEYTAB_USERNAME: "hdfs@HADOOP.COM"
    LOGIN_USER_KEYTAB_PATH: "/opt/hdfs.keytab"
    KERBEROS_EXPIRE_TIME: "2"
    HDFS_ROOT_USER: "hdfs"
    RESOURCE_MANAGER_HTTPADDRESS_PORT: "8088"
    YARN_RESOURCEMANAGER_HA_RM_IDS: ""
    YARN_APPLICATION_STATUS_ADDRESS: "http://ds1:%s/ws/v1/cluster/apps/%s"
    YARN_JOB_HISTORY_STATUS_ADDRESS: "http://ds1:19888/ws/v1/history/mapreduce/jobs/%s"
    DATASOURCE_ENCRYPTION_ENABLE: "false"
    DATASOURCE_ENCRYPTION_SALT: "!@#$%^&*"
    SUDO_ENABLE: "true"
    # dolphinscheduler env
    HADOOP_HOME: "/opt/soft/hadoop"
    HADOOP_CONF_DIR: "/opt/soft/hadoop/etc/hadoop"
    SPARK_HOME1: "/opt/soft/spark1"
    SPARK_HOME2: "/opt/soft/spark2"
    PYTHON_HOME: "/usr/bin/python"
    JAVA_HOME: "/usr/local/openjdk-8"
    HIVE_HOME: "/opt/soft/hive"
    FLINK_HOME: "/opt/soft/flink"
    DATAX_HOME: "/opt/soft/datax"
    SESSION_TIMEOUT_MS: 60000
    ORG_QUARTZ_THREADPOOL_THREADCOUNT: "25"
    ORG_QUARTZ_SCHEDULER_BATCHTRIGGERACQUISTITIONMAXCOUNT: "1"
  ## Shared storage persistence mounted into api, master and worker, such as Hadoop, Spark, Flink and DataX binary package
  sharedStoragePersistence:
    enabled: false
    mountPath: "/opt/soft"
    accessModes:
    - "ReadWriteMany"
    ## storageClassName must support the access mode: ReadWriteMany
    storageClassName: "-"
    storage: "20Gi"
  ## If RESOURCE_STORAGE_TYPE is HDFS and FS_DEFAULT_FS is file:///, fsFileResourcePersistence should be enabled for resource storage
  fsFileResourcePersistence:
    enabled: false
    accessModes:
    - "ReadWriteMany"
    ## storageClassName must support the access mode: ReadWriteMany
    storageClassName: "-"
    storage: "20Gi"

master:
  ## PodManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down.
  podManagementPolicy: "Parallel"
  ## Replicas is the desired number of replicas of the given Template.
  replicas: "3"
  ## You can use annotations to attach arbitrary non-identifying metadata to objects.
  ## Clients such as tools and libraries can retrieve this metadata.
  annotations: {}
  ## Affinity is a group of affinity scheduling rules. If specified, the pod's scheduling constraints.
  ## More info: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#affinity-v1-core
  affinity: {}
  ## NodeSelector is a selector which must be true for the pod to fit on a node.
  ## Selector which must match a node's labels for the pod to be scheduled on that node.
  ## More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
  nodeSelector: {}
  ## Tolerations are appended (excluding duplicates) to pods running with this RuntimeClass during admission,
  ## effectively unioning the set of nodes tolerated by the pod and the RuntimeClass.
  tolerations: []
  ## Compute Resources required by this container. Cannot be updated.
  ## More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container
  resources: {}
  # resources:
  #   limits:
  #     memory: "8Gi"
  #     cpu: "4"
  #   requests:
  #     memory: "2Gi"
  #     cpu: "500m"
  ## Configmap
  configmap:
    LOGGER_SERVER_OPTS: "-Xms512m -Xmx512m -Xmn256m"
    MASTER_SERVER_OPTS: "-Xms1g -Xmx1g -Xmn512m"
    MASTER_EXEC_THREADS: "100"
    MASTER_EXEC_TASK_NUM: "20"
    MASTER_DISPATCH_TASK_NUM: "3"
    MASTER_HOST_SELECTOR: "LowerWeight"
    MASTER_HEARTBEAT_INTERVAL: "10"
    MASTER_TASK_COMMIT_RETRYTIMES: "5"
    MASTER_TASK_COMMIT_INTERVAL: "1000"
    MASTER_MAX_CPULOAD_AVG: "-1"
    MASTER_RESERVED_MEMORY: "0.3"
    MASTER_FAILOVER_INTERVAL: 10
    MASTER_KILL_YARN_JOB_WHEN_HANDLE_FAILOVER: "true"
    ORG_QUARTZ_THREADPOOL_THREADCOUNT: "25"
    ORG_QUARTZ_SCHEDULER_BATCHTRIGGERACQUISTITIONMAXCOUNT: "1"
    MASTER_PERSIST_EVENT_STATE_THREADS: 10
  ## Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated.
  ## More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
  livenessProbe:
    enabled: true
    initialDelaySeconds: "30"
    periodSeconds: "30"
    timeoutSeconds: "5"
    failureThreshold: "3"
    successThreshold: "1"
  ## Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated.
  ## More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
  readinessProbe:
    enabled: true
    initialDelaySeconds: "30"
    periodSeconds: "30"
    timeoutSeconds: "5"
    failureThreshold: "3"
    successThreshold: "1"
  ## PersistentVolumeClaim represents a reference to a PersistentVolumeClaim in the same namespace.
  ## The StatefulSet controller is responsible for mapping network identities to claims in a way that maintains the identity of a pod.
  ## Every claim in this list must have at least one matching (by name) volumeMount in one container in the template.
  ## A claim in this list takes precedence over any volumes in the template, with the same name.
  persistentVolumeClaim:
    enabled: false
    accessModes:
    - "ReadWriteOnce"
    storageClassName: "-"
    storage: "20Gi"

worker:
  ## PodManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down.
  podManagementPolicy: "Parallel"
  ## Replicas is the desired number of replicas of the given Template.
  replicas: "3"
  ## You can use annotations to attach arbitrary non-identifying metadata to objects.
  ## Clients such as tools and libraries can retrieve this metadata.
  annotations: {}
  ## Affinity is a group of affinity scheduling rules. If specified, the pod's scheduling constraints.
  ## More info: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#affinity-v1-core
  affinity: {}
  ## NodeSelector is a selector which must be true for the pod to fit on a node.
  ## Selector which must match a node's labels for the pod to be scheduled on that node.
  ## More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
  nodeSelector: {}
  ## Tolerations are appended (excluding duplicates) to pods running with this RuntimeClass during admission,
  ## effectively unioning the set of nodes tolerated by the pod and the RuntimeClass.
  tolerations: []
  ## Compute Resources required by this container. Cannot be updated.
  ## More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container
  resources: {}
  # resources:
  #   limits:
  #     memory: "8Gi"
  #     cpu: "4"
  #   requests:
  #     memory: "2Gi"
  #     cpu: "500m"
  ## Configmap
  configmap:
    LOGGER_SERVER_OPTS: "-Xms512m -Xmx512m -Xmn256m"
    WORKER_SERVER_OPTS: "-Xms1g -Xmx1g -Xmn512m"
    WORKER_EXEC_THREADS: "100"
    WORKER_HEARTBEAT_INTERVAL: "10"
    WORKER_HOST_WEIGHT: "100"
    WORKER_MAX_CPULOAD_AVG: "-1"
    WORKER_RESERVED_MEMORY: "0.3"
    WORKER_GROUPS: "default"
    WORKER_RETRY_REPORT_TASK_STATUS_INTERVAL: 600
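    ## Example (assumption): a worker can register to several groups at once by
    ## listing them comma-separated, e.g.
    # WORKER_GROUPS: "default,bigdata"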
  ## Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated.
  ## More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
  livenessProbe:
    enabled: true
    initialDelaySeconds: "30"
    periodSeconds: "30"
    timeoutSeconds: "5"
    failureThreshold: "3"
    successThreshold: "1"
  ## Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated.
  ## More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
  readinessProbe:
    enabled: true
    initialDelaySeconds: "30"
    periodSeconds: "30"
    timeoutSeconds: "5"
    failureThreshold: "3"
    successThreshold: "1"
  ## PersistentVolumeClaim represents a reference to a PersistentVolumeClaim in the same namespace.
  ## The StatefulSet controller is responsible for mapping network identities to claims in a way that maintains the identity of a pod.
  ## Every claim in this list must have at least one matching (by name) volumeMount in one container in the template.
  ## A claim in this list takes precedence over any volumes in the template, with the same name.
  persistentVolumeClaim:
    enabled: false
    ## dolphinscheduler data volume
    dataPersistentVolume:
      enabled: false
      accessModes:
      - "ReadWriteOnce"
      storageClassName: "-"
      storage: "20Gi"
    ## dolphinscheduler logs volume
    logsPersistentVolume:
      enabled: false
      accessModes:
      - "ReadWriteOnce"
      storageClassName: "-"
      storage: "20Gi"

alert:
  ## Number of desired pods. This is a pointer to distinguish between explicit zero and not specified. Defaults to 1.
  replicas: "1"
  ## The deployment strategy to use to replace existing pods with new ones.
  strategy:
    type: "RollingUpdate"
    rollingUpdate:
      maxSurge: "25%"
      maxUnavailable: "25%"
  ## You can use annotations to attach arbitrary non-identifying metadata to objects.
  ## Clients such as tools and libraries can retrieve this metadata.
  annotations: {}
  ## Affinity is a group of affinity scheduling rules. If specified, the pod's scheduling constraints.
  ## More info: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#affinity-v1-core
  affinity: {}
  ## NodeSelector is a selector which must be true for the pod to fit on a node.
  ## Selector which must match a node's labels for the pod to be scheduled on that node.
  ## More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
  nodeSelector: {}
  ## Tolerations are appended (excluding duplicates) to pods running with this RuntimeClass during admission,
  ## effectively unioning the set of nodes tolerated by the pod and the RuntimeClass.
  tolerations: []
  ## Compute Resources required by this container. Cannot be updated.
  ## More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container
  resources: {}
  # resources:
  #   limits:
  #     memory: "2Gi"
  #     cpu: "1"
  #   requests:
  #     memory: "1Gi"
  #     cpu: "500m"
  ## Configmap
  configmap:
    ALERT_SERVER_OPTS: "-Xms512m -Xmx512m -Xmn256m"
  ## Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated.
  ## More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
  livenessProbe:
    enabled: true
    initialDelaySeconds: "30"
    periodSeconds: "30"
    timeoutSeconds: "5"
    failureThreshold: "3"
    successThreshold: "1"
  ## Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated.
  ## More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
  readinessProbe:
    enabled: true
    initialDelaySeconds: "30"
    periodSeconds: "30"
    timeoutSeconds: "5"
    failureThreshold: "3"
    successThreshold: "1"
  ## PersistentVolumeClaim represents a reference to a PersistentVolumeClaim in the same namespace.
  ## More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
  persistentVolumeClaim:
    enabled: false
    accessModes:
    - "ReadWriteOnce"
    storageClassName: "-"
    storage: "20Gi"

api:
  ## Number of desired pods. This is a pointer to distinguish between explicit zero and not specified. Defaults to 1.
  replicas: "1"
  ## The deployment strategy to use to replace existing pods with new ones.
  strategy:
    type: "RollingUpdate"
    rollingUpdate:
      maxSurge: "25%"
      maxUnavailable: "25%"
  ## You can use annotations to attach arbitrary non-identifying metadata to objects.
  ## Clients such as tools and libraries can retrieve this metadata.
  annotations: {}
  ## Affinity is a group of affinity scheduling rules. If specified, the pod's scheduling constraints.
  ## More info: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#affinity-v1-core
  affinity: {}
  ## NodeSelector is a selector which must be true for the pod to fit on a node.
  ## Selector which must match a node's labels for the pod to be scheduled on that node.
  ## More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
  nodeSelector: {}
  ## Tolerations are appended (excluding duplicates) to pods running with this RuntimeClass during admission,
  ## effectively unioning the set of nodes tolerated by the pod and the RuntimeClass.
  tolerations: []
  ## Compute Resources required by this container. Cannot be updated.
  ## More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container
  resources: {}
  # resources:
  #   limits:
  #     memory: "2Gi"
  #     cpu: "1"
  #   requests:
  #     memory: "1Gi"
  #     cpu: "500m"
  ## Configmap
  configmap:
    API_SERVER_OPTS: "-Xms512m -Xmx512m -Xmn256m"
  ## Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated.
  ## More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
  livenessProbe:
    enabled: true
    initialDelaySeconds: "30"
    periodSeconds: "30"
    timeoutSeconds: "5"
    failureThreshold: "3"
    successThreshold: "1"
  ## Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated.
  ## More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
  readinessProbe:
    enabled: true
    initialDelaySeconds: "30"
    periodSeconds: "30"
    timeoutSeconds: "5"
    failureThreshold: "3"
    successThreshold: "1"
  ## PersistentVolumeClaim represents a reference to a PersistentVolumeClaim in the same namespace.
  ## More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
  persistentVolumeClaim:
    enabled: false
    accessModes:
    - "ReadWriteOnce"
    storageClassName: "-"
    storage: "20Gi"
  service:
    ## type determines how the Service is exposed. Defaults to ClusterIP. Valid options are ExternalName, ClusterIP, NodePort, and LoadBalancer
    # expose the API service as a NodePort
    type: "NodePort"
    ## clusterIP is the IP address of the service and is usually assigned randomly by the master
    clusterIP: ""
    ## nodePort is the port on each node on which this service is exposed when type=NodePort
    # expose the web UI on node port 31234
    nodePort: "31234"
    ## externalIPs is a list of IP addresses for which nodes in the cluster will also accept traffic for this service
    externalIPs: []
    ## externalName is the external reference that kubedns or equivalent will return as a CNAME record for this service, requires Type to be ExternalName
    externalName: ""
    ## loadBalancerIP when service.type is LoadBalancer. LoadBalancer will get created with the IP specified in this field
    loadBalancerIP: ""
    ## annotations may need to be set when service.type is LoadBalancer
    ## service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:EXAMPLE_CERT
    annotations: {}

ingress:
  enabled: false
  host: "dolphinscheduler.org"
  path: "/dolphinscheduler"
  tls:
    enabled: false
    secretName: "dolphinscheduler-tls"
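With `api.service.type` set to `NodePort` above, the web UI is reachable on any node's IP at the configured port once the pods are ready. A minimal sketch (the Helm release name, namespace, and node IP below are assumptions; substitute your own):

```shell
# Redeploy with the edited values (release name and namespace are assumptions):
#   helm upgrade --install dolphinscheduler . -f values.yaml -n dolphinscheduler

# The api Service is exposed as a NodePort, so the UI URL is built from any
# node IP plus the nodePort configured above:
NODE_PORT=31234              # matches api.service.nodePort in values.yaml
NODE_IP=192.168.1.10         # hypothetical; list real IPs with: kubectl get nodes -o wide
echo "http://${NODE_IP}:${NODE_PORT}/dolphinscheduler"
```

Opening that URL in a browser should show the DolphinScheduler login page once all pods report Ready.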

Related articles:
Configuring Maven for IDEA on macOS
DolphinScheduler standalone deployment guide
Setting up a dolphinscheduler-2.0.4-dev secondary-development environment
Deploying a DolphinScheduler cluster on Kubernetes

Those who act often succeed; those who keep going often arrive.