【VMware VCF】VMware Cloud Foundation Part 05: Deploying the SDDC Management Domain
In the previous articles ("VMware Cloud Foundation Part 03: Preparing the Excel Parameter Workbook" and "VMware Cloud Foundation Part 04: Preparing the ESXi Hosts"), we saw that deploying VMware Cloud Foundation requires preparing the deployment parameter file and the ESXi hosts for the management domain. That preparation does take considerable time and effort, but once everything is in place, the actual deployment can often be finished within a few hours. That is the appeal of VMware Cloud Foundation's automated, standardized SDDC solution. Without further ado, let's get started.
I. Cloud Builder Tips
A few small tips can make working with the Cloud Builder tool easier. As the saying goes, sharpen your tools before you start the work.
1) Viewing the log files
While Cloud Builder deploys the VCF management domain, you may run into errors or failed tasks. When that happens, check the following log files inside Cloud Builder to find the root cause. SSH to Cloud Builder as the admin user, switch to root, and run the following commands.
tail -f /var/log/vmware/vcf/bringup/vcf-bringup.log
tail -f /var/log/vmware/vcf/bringup/vcf-bringup-debug.log
2) Enabling command history
By default, shell command history is disabled on the Cloud Builder VM, so scrolling back through previously used commands will not work. To enable history, remove the profile script that disables it. SSH to Cloud Builder as the admin user, switch to root, and run the following commands.
mv /etc/profile.d/disable.history.sh .
history
3) Resetting the Postgres database
After Cloud Builder finishes deploying the VCF management domain, it shows the screen below. If you then want to reuse Cloud Builder to redeploy the management domain or to deploy another VCF instance, the UI will remain stuck on that screen.
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725114518993-1197516380.png
To use Cloud Builder again, reset its Postgres database. SSH to Cloud Builder as the admin user, switch to root, and run the following commands.
/usr/pgsql/13/bin/psql -U postgres -d bringup -h localhost
delete from execution;
delete from "Resource";
\q
II. Custom vSAN ESA HCL File
If, like me, you deploy VMware Cloud Foundation on nested ESXi VMs and choose the vSAN OSA architecture for the management domain, you will not hit HCL compatibility problems, because the HCL JSON file is not checked. With the vSAN ESA architecture, however, using the official HCL JSON file (https://partnerweb.vmware.com/service/vsan/all.json) is guaranteed to cause compatibility problems: the ESXi host vSAN compatibility validation will fail (Failed to verify HCL status on ESXi Host vcf-mgmt01-esxi01.mulab.local), as shown below.
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725160032397-760366709.png
1) Generating a custom HCL JSON file for nested ESXi hosts
To work around this, you can use a PowerCLI script written by VMware engineer William Lam to generate a custom HCL JSON file; the full script is below. Interestingly, the same approach may also help with the hardware compatibility issues seen when deploying vSAN ESA clusters in nested environments or when using image-based vLCM lifecycle management. Note that you need a PowerCLI environment installed to run the following steps.
# Author: William Lam
# Description: Dynamically generate custom vSAN ESA HCL JSON file connected to standalone ESXi host
$vmhost = Get-VMHost
$supportedESXiReleases = @("ESXi 8.0 U2")
Write-Host -ForegroundColor Green "`nCollecting SSD information from ESXi host ${vmhost} ... "
$imageManager = Get-View ($Vmhost.ExtensionData.ConfigManager.ImageConfigManager)
$vibs = $imageManager.fetchSoftwarePackages()
$storageDevices = $vmhost.ExtensionData.Config.StorageDevice.scsiTopology.Adapter
$storageAdapters = $vmhost.ExtensionData.Config.StorageDevice.hostBusAdapter
$devices = $vmhost.ExtensionData.Config.StorageDevice.scsiLun
$pciDevices = $vmhost.ExtensionData.Hardware.PciDevice
$ctrResults = @()
$ssdResults = @()
$seen = @{}
foreach ($storageDevice in $storageDevices) {
$targets = $storageDevice.target
if($targets -ne $null) {
foreach ($target in $targets) {
foreach ($ScsiLun in $target.Lun.ScsiLun) {
$device = $devices | where {$_.Key -eq $ScsiLun}
$storageAdapter = $storageAdapters | where {$_.Key -eq $storageDevice.Adapter}
$pciDevice = $pciDevices | where {$_.Id -eq $storageAdapter.Pci}
# Convert from Dec to Hex
$vid = ('{0:x}' -f $pciDevice.VendorId).ToLower()
$did = ('{0:x}' -f $pciDevice.DeviceId).ToLower()
$svid = ('{0:x}' -f $pciDevice.SubVendorId).ToLower()
$ssid = ('{0:x}' -f $pciDevice.SubDeviceId).ToLower()
$combined = "${vid}:${did}:${svid}:${ssid}"
if($storageAdapter.Driver -eq "nvme_pcie" -or $storageAdapter.Driver -eq "pvscsi") {
switch ($storageAdapter.Driver) {
"nvme_pcie" {
$controllerType = $storageAdapter.Driver
$controllerDriver = ($vibs | where {$_.name -eq "nvme-pcie"}).Version
}
"pvscsi" {
$controllerType = $storageAdapter.Driver
$controllerDriver = ($vibs | where {$_.name -eq "pvscsi"}).Version
}
}
$ssdReleases=@{}
foreach ($supportedESXiRelease in $supportedESXiReleases) {
$tmpObj = @{
vsanSupport = @( "All Flash:","vSANESA-SingleTier")
$controllerType = @{
$controllerDriver = @{
firmwares = @(
@{
firmware = $device.Revision
vsanSupport = @{
tier = @("AF-Cache", "vSANESA-Singletier")
mode = @("vSAN", "vSAN ESA")
}
}
)
type = "inbox"
}
}
}
if(!$ssdReleases[$supportedESXiRelease]) {
$ssdReleases.Add($supportedESXiRelease,$tmpObj)
}
}
if($device.DeviceType -eq "disk" -and !$seen[$combined]) {
$ssdTmp = @{
id = $(Get-Random -Minimum 1000 -Maximum 50000).toString()
did = $did
vid = $vid
ssid = $ssid
svid = $svid
vendor = $device.Vendor
model = ($device.Model).trim()
devicetype = $device.ApplicationProtocol
partnername = $device.Vendor
productid = ($device.Model).trim()
partnumber = $device.SerialNumber
capacity = ((($device.Capacity.BlockSize * $device.Capacity.Block) / 1048576))
vcglink = "https://williamlam.com/homelab"
releases = $ssdReleases
vsanSupport = @{
mode = @("vSAN", "vSAN ESA")
tier = @("vSANESA-Singletier", "AF-Cache")
}
}
$controllerReleases=@{}
foreach ($supportedESXiRelease in $supportedESXiReleases) {
$tmpObj = @{
$controllerType = @{
$controllerDriver = @{
type = "inbox"
queueDepth = $device.QueueDepth
firmwares = @(
@{
firmware = $device.Revision
vsanSupport = @( "Hybrid:Pass-Through","All Flash:Pass-Through","vSAN ESA")
}
)
}
}
vsanSupport = @( "Hybrid:Pass-Through","All Flash:Pass-Through")
}
if(!$controllerReleases[$supportedESXiRelease]) {
$controllerReleases.Add($supportedESXiRelease,$tmpObj)
}
}
$controllerTmp = @{
id = $(Get-Random -Minimum 1000 -Maximum 50000).toString()
releases = $controllerReleases
}
$ctrResults += $controllerTmp
$ssdResults += $ssdTmp
$seen[$combined] = "yes"
}
}
}
}
}
}
# Retrieve the latest vSAN HCL jsonUpdatedTime
$results = Invoke-WebRequest -Uri 'https://vsanhealth.vmware.com/products/v1/bundles/lastupdatedtime' -Headers @{'x-vmw-esp-clientid'='vsan-hcl-vcf-2023'}
# Parse out content between '{...}'
$pattern = '\{(.+?)\}'
$matched = ([regex]::Matches($results, $pattern)).Value
if($matched -ne $null) {
$vsanHclTime = $matched|ConvertFrom-Json
} else {
Write-Error "Unable to retrieve vSAN HCL jsonUpdatedTime, ensure you have internet connectivity when running this script"
}
$hclObject = @{
timestamp = $vsanHclTime.timestamp
jsonUpdatedTime = $vsanHclTime.jsonUpdatedTime
totalCount = $($ssdResults.count + $ctrResults.count)
supportedReleases = $supportedESXiReleases
eula = @{}
data = @{
controller = @($ctrResults)
ssd = @($ssdResults)
hdd = @()
}
}
$dateTimeGenerated = Get-Date -Uformat "%m_%d_%Y_%H_%M_%S"
$outputFileName = "custom_vsan_esa_hcl_${dateTimeGenerated}.json"
Write-Host -ForegroundColor Green "Saving Custom vSAN ESA HCL to ${outputFileName}`n"
$hclObject | ConvertTo-Json -Depth 12 | Out-File -FilePath $outputFileName
Launch PowerShell, connect to the nested ESXi host with the PowerCLI cmdlet Connect-VIServer, and run the script above to generate the custom HCL JSON file.
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725142558870-1933306819.png
The generated custom HCL JSON file is shown below. Note that running the script requires internet access; if you cannot get online, manually download the official HCL JSON file (https://partnerweb.vmware.com/service/vsan/all.json) and copy the latest timestamp and jsonUpdatedTime values from it into your custom file.
{
"timestamp":1721122728,
"jsonUpdatedTime":"July 16, 2024, 2:38 AM PDT",
"totalCount":2,
"supportedReleases":[
"ESXi 8.0 U2"
],
"eula":{
},
"data":{
"controller":[
{
"id":33729,
"releases":{
"ESXi 8.0 U2":{
"nvme_pcie":{
"1.2.4.11-1vmw.802.0.0.22380479":{
"type":"inbox",
"queueDepth":510,
"firmwares":[
{
"firmware":"1.3",
"vsanSupport":[
"Hybrid:Pass-Through",
"All Flash:Pass-Through",
"vSAN ESA"
]
}
]
}
},
"vsanSupport":[
"Hybrid:Pass-Through",
"All Flash:Pass-Through"
]
}
}
}
],
"ssd":[
{
"id":25674,
"did":"7f0",
"vid":"15ad",
"ssid":"7f0",
"svid":"15ad",
"vendor":"NVMe",
"model":"VMware Virtual NVMe Disk",
"devicetype":"NVMe",
"partnername":"NVMe",
"productid":"VMware Virtual NVMe Disk",
"partnumber":"f72c2cf6551ae47e000c2968afc4b0ec",
"capacity":61440,
"vcglink":"https://williamlam.com/homelab",
"releases":{
"ESXi 8.0 U2":{
"vsanSupport":[
"All Flash:",
"vSANESA-SingleTier"
],
"nvme_pcie":{
"1.2.4.11-1vmw.802.0.0.22380479":{
"firmwares":[
{
"firmware":"1.3",
"vsanSupport":{
"tier":[
"AF-Cache",
"vSANESA-Singletier"
],
"mode":[
"vSAN",
"vSAN ESA"
]
}
}
],
"type":"inbox"
}
}
}
},
"vsanSupport":{
"mode":[
"vSAN",
"vSAN ESA"
],
"tier":[
"vSANESA-Singletier",
"AF-Cache"
]
}
}
],
"hdd":[
]
}
}
2) Re-saving the HCL JSON file
Oddly, the auto-generated HCL JSON file did not work for me as-is, and I do not know why. I opened the generated file in Notepad, copied everything into a second Notepad window, saved that as a new JSON file (e.g. all.json), and only then did the Cloud Builder validation succeed. If you hit the same problem, try this workaround.
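A likely explanation is file encoding rather than content: on Windows, PowerShell 5.x's Out-File writes UTF-16 with a BOM by default, which some JSON consumers reject, and re-saving from Notepad converts the file to a plain encoding. Instead of the copy-paste dance, you could re-encode the file programmatically; a minimal Python sketch (the file names and demo content here are examples, not from the real deployment):

```python
import json

def resave_json(src: str, dst: str) -> None:
    """Re-read a JSON file in whatever encoding it has and rewrite it
    as plain UTF-8 without a BOM, normalizing the formatting."""
    for enc in ("utf-8-sig", "utf-16"):
        try:
            with open(src, encoding=enc) as f:
                data = json.load(f)
            break
        except (UnicodeError, json.JSONDecodeError):
            continue  # wrong guess; try the next encoding
    else:
        raise ValueError(f"could not parse {src}")
    with open(dst, "w", encoding="utf-8") as f:
        json.dump(data, f, indent=2)

# Demo: create a UTF-16 file (standing in for Out-File's default output),
# then re-save it as plain UTF-8.
with open("custom_vsan_esa_hcl.json", "w", encoding="utf-16") as f:
    f.write('{"totalCount": 2}')
resave_json("custom_vsan_esa_hcl.json", "all.json")
```

Alternatively, adding `-Encoding utf8` to the `Out-File` call in the generation script should avoid the problem at the source.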
3) Uploading the HCL JSON file to Cloud Builder
After generating the custom HCL JSON file for the nested ESXi hosts, upload it to Cloud Builder over SFTP, and set the file's path in the Excel parameter workbook; it is used later when deploying the management domain.
mv /home/admin/all.json /opt/vmware/bringup/tmp/
chmod 644 /opt/vmware/bringup/tmp/all.json
chown vcf_bringup:vcf /opt/vmware/bringup/tmp/all.json
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725143229875-1905578161.png
III. NSX Manager Deployment Tips
1) Increasing the NSX Manager deployment wait time
During VCF management domain deployment, the automated deployment and configuration of the NSX components takes the longest. On underpowered hardware it can drag on for a long time and may even fail. You can raise the timeout Cloud Builder allows for the NSX components so the deployment can complete before timing out. SSH to Cloud Builder as the admin user, switch to root, and run the following command.
vim /opt/vmware/bringup/webapps/bringup-app/conf/application.properties
Add the following parameter:
nsxt.manager.wait.minutes=100 (or longer)
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725142632504-296693530.png
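If you rebuild the lab often, the same edit can be scripted instead of done by hand in vim. A minimal Python sketch of upserting a key in a Java-style properties file; the local file name is an example, while the real file lives at /opt/vmware/bringup/webapps/bringup-app/conf/application.properties:

```python
import os

def set_property(path: str, key: str, value: str) -> None:
    """Set key=value in a .properties file, replacing an existing line
    for that key or appending a new one."""
    lines = []
    if os.path.exists(path):
        with open(path) as f:
            lines = f.read().splitlines()
    found = False
    for i, line in enumerate(lines):
        if line.split("=", 1)[0].strip() == key:
            lines[i] = f"{key}={value}"
            found = True
    if not found:
        lines.append(f"{key}={value}")
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

# Mirror the Cloud Builder setting described above (example file name).
set_property("application.properties", "nsxt.manager.wait.minutes", "100")
```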
Restart the Cloud Builder service.
systemctl restart vcf-bringup
2) Reducing the number of NSX Manager nodes
By default, three NSX Manager nodes are deployed and formed into a full NSX cluster. For testing and learning, when the host running the VCF environment is short on resources, you can deploy just one NSX Manager node, which greatly reduces resource consumption.
Convert the Excel parameter workbook into the JSON configuration file, then locate the NSX section in the JSON, shown below.
"nsxtSpec":
{
"nsxtManagerSize": "medium",
"nsxtManagers": [
{
"hostname": "vcf-mgmt01-nsx01a",
"ip": "192.168.32.67"
},
{
"hostname": "vcf-mgmt01-nsx01b",
"ip": "192.168.32.68"
},
{
"hostname": "vcf-mgmt01-nsx01c",
"ip": "192.168.32.69"
}
],
"rootNsxtManagerPassword": "Vcf5@password",
"nsxtAdminPassword": "Vcf5@password",
"nsxtAuditPassword": "Vcf5@password",
"vip": "192.168.32.66",
"vipFqdn": "vcf-mgmt01-nsx01",
Remove the other two NSX Manager nodes from the JSON file, as shown below. With that, only one node will be deployed.
"nsxtSpec":
{
"nsxtManagerSize": "medium",
"nsxtManagers": [
{
"hostname": "vcf-mgmt01-nsx01a",
"ip": "192.168.32.67"
}
],
"rootNsxtManagerPassword": "Vcf5@password",
"nsxtAdminPassword": "Vcf5@password",
"nsxtAuditPassword": "Vcf5@password",
"vip": "192.168.32.66",
"vipFqdn": "vcf-mgmt01-nsx01",
3) Adjusting the NSX Manager default storage policy
For the same reason, when hardware performance is limited, you can change the default vSAN storage policy to FTT=0 (no replicas), which speeds up deployment of the NSX Manager components. After the VCF management domain deployment succeeds, switch the NSX Manager nodes back to the vSAN ESA default storage policy (RAID 5). Note that this change must be made in the vSphere Client before Cloud Builder deploys the NSX Manager components.
4) Reducing the NSX Manager memory reservation
Likewise, when hardware resources are tight, you can set the memory reservation of the NSX Manager node VMs to 0, so they no longer reserve their full allocated memory. This can be done in the vSphere Client after the VCF management domain deployment succeeds.
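The single-node change from tip 2) above can also be applied programmatically to the converted JSON file rather than edited by hand. A minimal Python sketch, using a cut-down spec with the values from this article (in practice you would load and re-save the real bringup JSON file):

```python
import json

# Cut-down bringup spec with the three default NSX Manager nodes
# (values mirror the example JSON in this article).
spec = {
    "nsxtSpec": {
        "nsxtManagerSize": "medium",
        "nsxtManagers": [
            {"hostname": "vcf-mgmt01-nsx01a", "ip": "192.168.32.67"},
            {"hostname": "vcf-mgmt01-nsx01b", "ip": "192.168.32.68"},
            {"hostname": "vcf-mgmt01-nsx01c", "ip": "192.168.32.69"},
        ],
        "vip": "192.168.32.66",
    }
}

def keep_one_nsx_manager(spec: dict) -> dict:
    """Keep only the first NSX Manager entry; the VIP then fronts a
    single node instead of a three-node cluster."""
    spec["nsxtSpec"]["nsxtManagers"] = spec["nsxtSpec"]["nsxtManagers"][:1]
    return spec

trimmed = keep_one_nsx_manager(spec)
print(json.dumps(trimmed["nsxtSpec"]["nsxtManagers"], indent=2))
```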
IV. Preparing the JSON Configuration File
1) Excel parameter workbook
Below is the Excel parameter workbook prepared for this environment, to give you a concrete picture. License keys have been redacted.
[*] Credentials sheet
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725134723482-1148725686.png
[*] Hosts and Networks sheet
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725135104473-1489782702.png
[*] Deploy Parameters sheet
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725135526998-1394581798.png
2) JSON configuration file
The deployment below imports this JSON configuration file, which keeps only one NSX Manager node. License keys have been redacted.
{ "subscriptionLicensing": false,"skipEsxThumbprintValidation": false,"managementPoolName": "vcf-mgmt01-np01","sddcManagerSpec": { "secondUserCredentials": { "username": "vcf", "password": "Vcf5@password" }, "ipAddress": "192.168.32.70", "hostname": "vcf-mgmt01-sddc01", "rootUserCredentials": { "username": "root", "password": "Vcf5@password" }, "localUserPassword": "Vcf5@password"},"sddcId": "vcf-mgmt01","esxLicense": "00000-00000-00000-00000-00000","taskName": "workflowconfig/workflowspec-ems.json","ceipEnabled": false,"fipsEnabled": false,"ntpServers": ["192.168.32.3"],"dnsSpec": { "subdomain": "mulab.local", "domain": "mulab.local", "nameserver": "192.168.32.3"},"networkSpecs": [ { "networkType": "MANAGEMENT", "subnet": "192.168.32.0/24", "gateway": "192.168.32.254", "vlanId": "0", "mtu": "1500", "portGroupKey": "vcf-mgmt01-vds01-pg-mgmt", "standbyUplinks":[], "activeUplinks":[ "uplink1", "uplink2" ] }, { "networkType": "VMOTION", "subnet": "192.168.40.0/24", "gateway": "192.168.40.254", "vlanId": "40", "mtu": "9000", "portGroupKey": "vcf-mgmt01-vds01-pg-vmotion", "includeIpAddressRanges": [{"endIpAddress": "192.168.40.4", "startIpAddress": "192.168.40.1"}], "standbyUplinks":[], "activeUplinks":[ "uplink1", "uplink2" ] }, { "networkType": "VSAN", "subnet": "192.168.41.0/24", "gateway": "192.168.41.254", "vlanId": "41", "mtu": "9000", "portGroupKey": "vcf-mgmt01-vds02-pg-vsan", "includeIpAddressRanges": [{"endIpAddress": "192.168.41.4", "startIpAddress": "192.168.41.1"}], "standbyUplinks":[], "activeUplinks":[ "uplink1", "uplink2" ] }, { "networkType": "VM_MANAGEMENT", "subnet": "192.168.32.0/24", "gateway": "192.168.32.254", "vlanId": "0", "mtu": "9000", "portGroupKey": "vcf-mgmt01-vds01-pg-vm-mgmt", "standbyUplinks":[], "activeUplinks":[ "uplink1", "uplink2" ] }],"nsxtSpec":
{
"nsxtManagerSize": "medium",
"nsxtManagers": [
{
"hostname": "vcf-mgmt01-nsx01a",
"ip": "192.168.32.67"
}
],
"rootNsxtManagerPassword": "Vcf5@password",
"nsxtAdminPassword": "Vcf5@password",
"nsxtAuditPassword": "Vcf5@password",
"vip": "192.168.32.66",
"vipFqdn": "vcf-mgmt01-nsx01", "nsxtLicense": "33333-33333-33333-33333-33333", "transportVlanId": 42, "ipAddressPoolSpec": { "name": "vcf-mgmt01-tep01", "description": "ESXi Host Overlay TEP IP Pool", "subnets":[ { "ipAddressPoolRanges":[ { "start": "192.168.42.1", "end": "192.168.42.8" } ], "cidr": "192.168.42.0/24", "gateway": "192.168.42.254" } ] }},"vsanSpec": { "licenseFile": "11111-11111-11111-11111-11111", "vsanDedup": "false", "esaConfig": { "enabled": true }, "hclFile": "/opt/vmware/bringup/tmp/all.json", "datastoreName": "vcf-mgmt01-vsan-esa-datastore01"},"dvsSpecs": [ { "dvsName": "vcf-mgmt01-vds01", "vmnics": [ "vmnic0", "vmnic1" ], "mtu": 9000, "networks":[ "MANAGEMENT", "VMOTION", "VM_MANAGEMENT" ], "niocSpecs":[ { "trafficType":"VSAN", "value":"HIGH" }, { "trafficType":"VMOTION", "value":"LOW" }, { "trafficType":"VDP", "value":"LOW" }, { "trafficType":"VIRTUALMACHINE", "value":"HIGH" }, { "trafficType":"MANAGEMENT", "value":"NORMAL" }, { "trafficType":"NFS", "value":"LOW" }, { "trafficType":"HBR", "value":"LOW" }, { "trafficType":"FAULTTOLERANCE", "value":"LOW" }, { "trafficType":"ISCSI", "value":"LOW" } ], "nsxtSwitchConfig": { "transportZones": [ { "name": "vcf-mgmt01-tz-vlan01", "transportType": "VLAN" } ] } }, { "dvsName": "vcf-mgmt01-vds02", "vmnics": [ "vmnic2", "vmnic3" ], "mtu": 9000, "networks":[ "VSAN" ], "nsxtSwitchConfig": { "transportZones": [ { "name": "vcf-mgmt01-tz-overlay01", "transportType": "OVERLAY" }, { "name": "vcf-mgmt01-tz-vlan02", "transportType": "VLAN" } ] } }],"clusterSpec":{ "clusterName": "vcf-mgmt01-cluster01", "clusterEvcMode": "intel-broadwell", "clusterImageEnabled": true, "vmFolders": { "MANAGEMENT": "vcf-mgmt01-fd-mgmt", "NETWORKING": "vcf-mgmt01-fd-nsx", "EDGENODES": "vcf-mgmt01-fd-edge" }},"pscSpecs": [ { "adminUserSsoPassword": "Vcf5@password", "pscSsoSpec": { "ssoDomain": "vsphere.local" } }],"vcenterSpec": { "vcenterIp": "192.168.32.65", "vcenterHostname": "vcf-mgmt01-vcsa01", "licenseFile": 
"22222-22222-22222-22222-22222", "vmSize": "small", "storageSize": "", "rootVcenterPassword": "Vcf5@password"},"hostSpecs": [ { "association": "vcf-mgmt01-datacenter01", "ipAddressPrivate": { "ipAddress": "192.168.32.61" }, "hostname": "vcf-mgmt01-esxi01", "credentials": { "username": "root", "password": "Vcf5@password" }, "sshThumbprint": "SHA256:PYxgi8oEfK3j263pHx3InwL1xjIY1rAYN6pR607NWjc", "sslThumbprint": "FF:A2:88:5B:C3:9A:A0:14:CE:ED:6D:F7:CE:5C:55:B6:2B:6D:35:E8:60:AE:79:79:FD:A3:A7:6C:D7:C1:5C:FA", "vSwitch": "vSwitch0" }, { "association": "vcf-mgmt01-datacenter01", "ipAddressPrivate": { "ipAddress": "192.168.32.62" }, "hostname": "vcf-mgmt01-esxi02", "credentials": { "username": "root", "password": "Vcf5@password" }, "sshThumbprint": "SHA256:h6HfTvQi/HJxFq48Q4SQH1TevWqNvgEQ1kWARQwpjKw", "sslThumbprint": "70:1A:62:4F:B6:A9:A2:E2:AC:6E:4D:28:DE:E5:A8:FE:B1:F3:B0:A0:3F:26:93:86:F1:66:B3:A6:44:50:1F:AE", "vSwitch": "vSwitch0" }, { "association": "vcf-mgmt01-datacenter01", "ipAddressPrivate": { "ipAddress": "192.168.32.63" }, "hostname": "vcf-mgmt01-esxi03", "credentials": { "username": "root", "password": "Vcf5@password" }, "sshThumbprint": "SHA256:rniXpvC4JmiXVq7nd+FkjMrX+oTKCM+CgkvglKATgEE", "sslThumbprint": "76:84:9E:03:BB:C5:10:FE:72:FC:D3:24:84:71:F5:85:7B:A7:0B:55:7C:7B:0F:BB:83:EA:D7:4F:66:3E:B1:8D", "vSwitch": "vSwitch0" }, { "association": "vcf-mgmt01-datacenter01", "ipAddressPrivate": { "ipAddress": "192.168.32.64" }, "hostname": "vcf-mgmt01-esxi04", "credentials": { "username": "root", "password": "Vcf5@password" }, "sshThumbprint": "SHA256:b5tRZdaKBbMUGmXPAph5s6XdMKQ5Mh0pjzgM0A16J/g", "sslThumbprint": "97:83:39:DE:C0:D3:99:06:49:FF:1C:E8:BA:76:60:C6:C1:45:19:BD:C9:10:B0:C2:58:AC:71:12:C8:21:A9:BF", "vSwitch": "vSwitch0" }]}
V. Deploying the SDDC Management Domain
With everything above prepared, we can now deploy the SDDC management domain. Access Cloud Builder from a jump host and log in.
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725145700100-1502829602.png
Select the VMware Cloud Foundation platform.
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725145719618-1800212082.png
Accept the license agreement and click NEXT.
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725145728996-1133609528.png
Confirm the parameter configuration file is ready and click NEXT.
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725145737130-1925294818.png
Upload the JSON configuration file and click NEXT.
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725145747401-465026073.png
The configuration file check completes; click NEXT.
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725145755646-903366258.png
Confirm to deploy the SDDC.
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725145805619-806146952.png
The SDDC bring-up process starts.
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725145817909-1362123109.png
Go have a meal or a coffee while the deployment finishes.
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725173955875-1750225297.png
All tasks in the deployment process (screenshot from an earlier run).
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240723121524043-1605827455.gif
Download the deployment report; the whole run took 2 hours.
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725174014976-938840373.png
Click FINISH and open SDDC Manager.
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725174028965-841194428.png
Jump to vCenter Server and log in with the password.
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725174050211-1737398616.png
Check the VMware Cloud Foundation version.
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725174106638-1239330832.png
VI. SDDC Management Domain Details
1) SDDC Manager
[*] SDDC Manager dashboard
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725174253090-498866973.png
[*] All workload domains in the SDDC Manager inventory
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725174305484-164574658.png
[*] vcf-mgmt01 management workload domain summary
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725174320427-2033586050.png
[*] Hosts in the vcf-mgmt01 management workload domain
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725174337612-784339197.png
[*] Clusters in the vcf-mgmt01 management workload domain
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725174401551-736129950.png
[*] Component certificates of the vcf-mgmt01 management workload domain
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725174415777-812554335.png
[*] All hosts in the SDDC Manager inventory
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725174428903-1275116175.png
[*] Release versions included in SDDC Manager
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725174440068-1357156297.png
[*] Network pools created in SDDC Manager
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725174456080-16015184.png
[*] SDDC Manager configuration backup
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725174507245-618789021.png
[*] Component password management in SDDC Manager
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725174520795-1239431326.png
2) NSX Manager
[*] NSX system configuration overview
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725174555754-258009088.png
[*] NSX Manager appliances
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725174607857-744993057.png
[*] NSX transport nodes
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725174619007-861650395.png
[*] NSX profiles
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725174630291-1227927216.png
[*] NSX transport zones
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725174639397-791854496.png
[*] NSX configuration backup
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725174650195-149465984.png
3) vCenter Server
[*] Hosts and clusters of the VCF management domain
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725174750331-615817152.png
[*] vSAN ESA storage architecture of the VCF management domain
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725174806209-1286869885.png
[*] Component VMs of the VCF management domain
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725174822872-996748341.png
[*] vSAN datastore used by the VCF management domain
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725174838229-2143149753.png
[*] Distributed switch configuration of the VCF management domain
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725174847657-1677720660.png
[*] Network configuration of the VCF management domain ESXi hosts
https://img2024.cnblogs.com/blog/2313726/202407/2313726-20240725174902831-1754551159.png