Installation
Configure Helm
Install Helm
```bash
wget https://get.helm.sh/helm-v3.15.3-linux-amd64.tar.gz
tar xf helm-v3.15.3-linux-amd64.tar.gz
cp linux-amd64/helm /usr/local/bin/
```
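If the install worked, the client reports its version:

```bash
helm version
```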
Add a proxy
```bash
export https_proxy=http://192.168.2.1:7890
export http_proxy=http://192.168.2.1:7890
export all_proxy=socks5://192.168.2.1:7890
```
Add the Milvus Helm repository
```bash
helm repo add milvus https://zilliztech.github.io/milvus-helm/
helm repo update
```
For offline installation, the Milvus Helm chart source is available at: https://github.com/zilliztech/milvus-helm.git
List the available chart versions
```bash
helm search repo milvus --versions
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /root/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /root/.kube/config
NAME            CHART VERSION   APP VERSION     DESCRIPTION
milvus/milvus   4.2.4           2.4.7           Milvus is an open-source vector database built ...
milvus/minio    8.0.17          master          High Performance, Kubernetes Native Object Storage
```
Generate milvus_manifest.yaml
```bash
helm template my-release milvus/milvus --version 4.2.4 > milvus_manifest.yaml
```
Specify a namespace
```bash
helm template my-release milvus/milvus --version 4.1.28 --namespace milvus -f custom-values.yaml > milvus_manifest.yaml
```
```bash
helm template my-release milvus/milvus \
  --version 4.2.4 \
  --namespace milvus \
  --set nodeSelector.node=milvus \
  --set attu.enabled=true \
  --set minio.accessKey=minioadmin \
  --set minio.secretKey=123456 \
  --set minio.enabled=true \
  --set ingress.enabled=true \
  --set ingress.rules[0].host="milvus-web.milvus.com" \
  > milvus_manifest.yaml
```
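The same overrides can live in the custom-values.yaml file referenced in the previous command instead of `--set` flags. A minimal sketch that mirrors the flags above (not validated against every chart version; adjust keys to your chart):

```bash
# Hypothetical custom-values.yaml matching the --set flags above
cat > custom-values.yaml <<'EOF'
nodeSelector:
  node: milvus
attu:
  enabled: true
minio:
  enabled: true
  accessKey: minioadmin
  secretKey: "123456"
ingress:
  enabled: true
  rules:
    - host: milvus-web.milvus.com
EOF
```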
Download the image-saving scripts
```bash
wget https://raw.githubusercontent.com/milvus-io/milvus/master/deployments/offline/requirements.txt
wget https://raw.githubusercontent.com/milvus-io/milvus/master/deployments/offline/save_image.py
```
Pull the Docker images
```bash
pip3 install -r requirements.txt
python3 save_image.py --manifest milvus_manifest.yaml
```
Load the images
```bash
cd image
for image in $(find . -type f -name "*.tar.gz"); do gunzip -c $image | docker load; done
```
Push the images to Harbor
```bash
# Create the projects: milvusdb, apachepulsar, and minio
curl -X POST \
  -H "Authorization: Basic $(echo -n 'admin:xxxxxxxx' | base64)" \
  -H "Content-Type: application/json" \
  -d '{"project_name": "milvusdb","public": false,"storage_limit": -1}' \
  "192.168.1.11:81/api/v2.0/projects"
# Tag and push the images to Harbor
for i in $(docker images | grep -E "^milvusdb|^apachepulsar|^minio" | awk '{print $1":"$2}'); do
  docker tag $i 192.168.1.11:81/$i && docker push 192.168.1.11:81/$i
done
```
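Because this Harbor instance is reached over plain HTTP on port 81, each Docker host usually has to trust it explicitly before pushes and pulls will work. A minimal sketch, assuming /etc/docker/daemon.json holds no other settings yet:

```bash
# Whitelist the HTTP registry (overwrites an empty daemon.json), then restart Docker
cat > /etc/docker/daemon.json <<'EOF'
{
  "insecure-registries": ["192.168.1.11:81"]
}
EOF
systemctl restart docker
```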
Rewrite the image references to point at the Harbor registry
```bash
# Replace the milvusdb/milvus image
sed -i 's,\(milvusdb\/milvus\):,192.168.1.11:81\/\1:,g' milvus_manifest.yaml
# Replace the etcd image
sed -i 's,\(docker\.io\/milvusdb\/etcd\):,192.168.1.11:81\/\1:,g' milvus_manifest.yaml
# Replace the minio image
sed -i 's,\(minio\/minio\):,192.168.1.11:81\/\1:,g' milvus_manifest.yaml
# Replace the pulsar image
sed -i 's,\(apachepulsar\/pulsar\):,192.168.1.11:81\/\1:,g' milvus_manifest.yaml
```
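A quick check that every image reference now points at Harbor:

```bash
grep "image:" milvus_manifest.yaml | sort -u
```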
If etcd is configured to use a hostPath volume, set the data directory ownership
```bash
sudo chown -R 1001:1001 /opt/milvus/etcd
```
Install Milvus
```bash
kubectl apply -f milvus_manifest.yaml
```
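Then watch the rollout until every pod is Running (adjust `-n` to whichever namespace the manifest was applied in):

```bash
kubectl get pods -n milvus -w
```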
Additional configuration
Change the default MinIO password
Edit milvus_manifest.yaml:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-release-minio
  labels:
    app: minio
    chart: minio-8.0.17
    release: my-release
    heritage: Helm
type: Opaque
data:
  # values are base64-encoded
  accesskey: "bWluaW9hZG1pbg=="
  secretkey: "MTIzNDU2Nzg="
```
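The encoded values come from base64 (encoding, not encryption):

```bash
echo -n 'minioadmin' | base64   # bWluaW9hZG1pbg==
echo -n '12345678' | base64     # MTIzNDU2Nzg=
```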
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-release-milvus
data:
  default.yaml: |+
    minio:
      address: my-release-minio
      port: 9000
      accessKeyID: minioadmin
      # secretAccessKey must match the secretkey in the Secret above
      secretAccessKey: 12345678
      useSSL: false
      bucketName: milvus-bucket
      rootPath: file
      useIAM: false
      cloudProvider: aws
      iamEndpoint:
      region:
      useVirtualHost: false
```
Use a StorageClass for MinIO
Edit milvus_manifest.yaml:
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-release-minio
  labels:
    app: minio
    chart: minio-8.0.17
    release: my-release
    heritage: Helm
spec:
  volumeClaimTemplates:
    - metadata:
        name: export
      spec:
        storageClassName: nfs  # use the StorageClass named "nfs"
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 2560Gi
```
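This assumes a StorageClass named nfs already exists in the cluster; verify before applying:

```bash
kubectl get storageclass nfs
```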
Use hostPath for etcd
Persisting etcd data on NFS does not perform well enough, so map the etcd data directory onto the host with a hostPath volume.
```yaml
volumeMounts:
  - name: data
    mountPath: /bitnami/etcd
volumes:
  - name: data
    hostPath:
      path: /opt/milvus/etcd
      type: DirectoryOrCreate
#volumeClaimTemplates:
#  - metadata:
#      name: data
#    spec:
#      accessModes:
#        - "ReadWriteOnce"
#      resources:
#        requests:
#          storage: "10Gi"
```
Enable Milvus authentication
Edit milvus_manifest.yaml:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-release-milvus
  namespace: default
data:
  user.yaml: |+
    common:
      security:
        authorizationEnabled: true
        defaultRootPassword: 123456
```
With authentication enabled, the built-in default credentials are root/Milvus; defaultRootPassword above overrides the password.
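One way to confirm authentication is enforced, assuming the Milvus 2.4 RESTful v2 API is reachable on the proxy's port 19530 (the host and credentials here are illustrative):

```bash
# Should list collections with valid credentials and be rejected without them
curl -s "http://my-release-milvus:19530/v2/vectordb/collections/list" \
  -H "Authorization: Bearer root:123456" \
  -H "Content-Type: application/json" \
  -d '{}'
```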
Configure an Nginx proxy for MinIO
Edit /etc/nginx/nginx.conf:
```nginx
# Cap the request body size for all proxied requests at 10240M
client_max_body_size 10240M;
```
Edit /etc/nginx/conf.d/minio.conf:
```nginx
server {
    listen 9000;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_connect_timeout 300;
        # Default is HTTP/1; keepalive is only enabled in HTTP/1.1
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        chunked_transfer_encoding off;
        proxy_pass http://10.43.82.8:9000;
    }
}
```
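Validate the configuration and reload Nginx:

```bash
nginx -t && systemctl reload nginx
```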
Reference: https://minio.org.cn/docs/minio/linux/integrations/setup-nginx-proxy-with-minio.html
Known issues
my-release-milvus-rootcoord fails to start
The error log:
```
[2024/10/18 11:13:51.532 +00:00] [ERROR] [msgstream/mq_msgstream.go:138] ["retry func failed"] ["retry time"=4] [error="no partitioned metadata for topic{public/milvus-test/by-dev-rootcoord-dml_0} in lookup response"]
```
Troubleshooting: the my-release-pulsar-broker-0 pod logs report `Policies not found for public/milvus-test namespace`. The root cause is that the `public/milvus-test` namespace was never created in ZooKeeper. Query the namespaces ZooKeeper knows about:
```bash
root@my-release-pulsar-zookeeper-0:/pulsar# bin/pulsar-admin --admin-url http://my-release-pulsar-broker:8080 namespaces list public
"public/default"
```
Creating the namespace restores the service:
```bash
bin/pulsar-admin --admin-url http://my-release-pulsar-broker:8080 namespaces create public/milvus-test
```
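Listing the namespaces again should now include the new one:

```bash
bin/pulsar-admin --admin-url http://my-release-pulsar-broker:8080 namespaces list public
```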
my-release-pulsar-zookeeper fails to start
The ZooKeeper log reports:
```
java.net.UnknownHostException: my-release-pulsar-zookeeper-2.my-release-pulsar-zookeeper.milvus-gpu.svc.cluster.local
```
Only two pods had come up; the my-release-pulsar-zookeeper-2 pod was never created. Clearing the ZooKeeper data directory and redeploying resolved it.
Tune the my-release-pulsar-bookie ConfigMap
```yaml
PULSAR_MEM: |
  -Xms8192m -Xmx8192m -XX:MaxDirectMemorySize=16384m
dbStorage_readAheadCacheMaxSizeMb: "64"
dbStorage_rocksDB_blockCacheSize: "8388608"
dbStorage_rocksDB_writeBufferSizeMB: "16"
dbStorage_writeCacheMaxSizeMb: "64"
nettyMaxFrameSizeBytes: "104867840"
```
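ConfigMap edits are not picked up by running pods; one way to apply them is to restart the bookie StatefulSet (release name from above, namespace assumed to be milvus):

```bash
kubectl rollout restart statefulset my-release-pulsar-bookie -n milvus
```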