After QEMU starts, the default Administrator account can log in via SSH, but web login fails

Problem description

The environment was set up on Ubuntu 24.04 with the 25.12 LTS manifest. After building the simulation package and launching QEMU with the one-click run, the Administrator/Admin@90000 account logs in normally over SSH. The web page then also loads normally, but the same account cannot log in. I subsequently tried creating a new user with ipmc, but an exception occurs and configuring the new user's privileges fails.

Environment

  • OS: Ubuntu 24.04

  • Software version: OpenUBMC 25.12-LTS-SP1

Steps to reproduce

  1. Start QEMU: python3 build/works/packet/qemu_shells/vemake_1711.py

  2. After waiting a while, log in over SSH with Administrator/Admin@90000; login succeeds

  3. Open the web page and log in with the same account; login fails

  4. Try to create a new user with ipmc and grant it administrator privileges; an error is reported

Expected result

A new user can be created normally and can log in via the web.

Actual result

Error output from ipmc and security.log:

opt/bmc/skynet/lua: ./opt/bmc/apps/cli/lualib/config.lua:72: bad argument #2 to 'get_product_info' (number expected, got string)
stack traceback:
[C]: in function 'get_product_info'
./opt/bmc/apps/cli/lualib/config.lua:72: in function 'get_interface_config_path'
./opt/bmc/apps/cli/lualib/config.lua:82: in main chunk
[C]: in function 'require'
./opt/bmc/apps/cli/lualib/command/base.lua:21: in main chunk
[C]: in function 'require'
./opt/bmc/apps/cli/lualib/command/ordinary.lua:16: in main chunk
[C]: in function 'require'
./opt/bmc/apps/cli/lualib/command/command.lua:16: in main chunk
[C]: in function 'require'
./opt/bmc/apps/cli/lualib/ipmc.lua:15: in main chunk
[C]: in function 'require'
./opt/bmc/apps/cli/service/ipmcset.lua:31: in main chunk
[C]: at 0xaaaad69ac9f8
/var/log # cat security.log 
2023-08-15T09:21:16.630177+00:00 iBMC sshd[4166]: Server listening on ::1 port 22.
2023-08-15T09:21:16.635025+00:00 iBMC sshd[4166]: Server listening on ::2 port 22.
2023-08-15T09:21:16.647907+00:00 iBMC sshd[4166]: Server listening on 127.0.0.1 port 22.
2023-08-15T09:23:41.347419+00:00 iBMC sshd[5069]: Connection closed by 10.0.2.2 port 53276 [preauth]
2023-08-15T09:20:47.735450+00:00 iBMC sshd[5283]: Accepted password for Administrator from 10.0.2.2 port 33770 ssh2
2023-08-15T09:20:47.818160+00:00 iBMC sshd[5283]: pam_unix(sshd:session): session opened for user Administrator(uid=502) by (uid=0)
2023-08-15T09:20:52.237622+00:00 iBMC (syst: pam_warn(systemd-user:setcred): function=[pam_sm_setcred] flags=0x8002 service=[systemd-user] terminal=[<unknown>] user=[Administrator] ruser=[<unknown>] rhost=[<unknown>]
2023-08-15T09:21:00.757712+00:00 iBMC sshd[5450]: There were 1 logins (max 10), current session is Administrator from 10.0.2.2.
2023-08-15T09:20:58.876480+00:00 iBMC security: User admin(10.0.2.2) login failed
2023-08-15T09:21:02.198632+00:00 iBMC security: User Administrator(10.0.2.2) login failed

Attempted workaround

After the error above occurs, the manifest/output/data directory has to be deleted and python3 build/works/packet/qemu_shells/vemake_1711.py re-run before QEMU will run normally again.
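
For reference, the full recovery sequence looks like this (paths and script name taken from this report):

rm -rf manifest/output/data                              # wipe the generated runtime data
python3 build/works/packet/qemu_shells/vemake_1711.py    # rebuild and relaunch QEMU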

The following answer was generated by AI and is for reference only:


Analysis and Solutions: Web Login Failure and User Privilege Configuration Errors after QEMU Startup

Based on the environment details, reproduction steps, and logs you provided, this answer analyzes the root causes of the web login failure and the ipmcset privilege configuration error in the OpenUBMC QEMU simulation environment, and offers targeted solutions.


:magnifying_glass_tilted_left: Problem Overview

| Item | Detail |
| --- | --- |
| Environment | Ubuntu 24.04 / 25.12 LTS manifest, OpenUBMC 25.12-LTS-SP1 |
| QEMU launch command | python3 build/works/packet/qemu_shells/vemake_1711.py |
| SSH login | :white_check_mark: succeeds (account: Administrator/Admin@90000) |
| Web login | :cross_mark: fails (same account cannot log in) |
| User creation and privilege grant | :cross_mark: error: bad argument #2 to 'get_product_info' (number expected, got string) |

:puzzle_piece: Problem 1: Web Login Failure

:white_check_mark: Confirmed Symptoms

  • SSH login succeeds, so the core system services are running.
  • The web page loads normally (https://localhost:10443), but login fails with "login failed".
  • security.log records the failed login attempt:
    User Administrator(10.0.2.2) login failed
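
To separate front-end problems from backend authentication failures, the login REST endpoint can be called directly. A minimal sketch with curl; the endpoint /UI/Rest/Login and host port 10443 appear in the logs in this thread, but the JSON field names are assumptions and may differ in your build:

# POST to the login endpoint directly, skipping the browser entirely.
# -k skips certificate verification (the BMC serves a self-signed cert).
curl -k -i -X POST https://localhost:10443/UI/Rest/Login \
     -H "Content-Type: application/json" \
     -d '{"UserName":"Administrator","Password":"Admin@90000"}'
# A 500 response here (as seen later in this thread) points at the backend,
# not at the browser or the front-end assets.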

:pushpin: Possible Causes

  1. Nginx or the web service has not fully started

    • The page can load while the backend authentication service is still not ready.
    • The Qemu FAQ reference document notes that "if the web connection fails, nginx may not have been pulled up yet"; wait a few seconds or check the service status.
  2. SSL certificate loading failure breaking nginx

    • The reference document (《openUBMC300 V3问题定位分享》) points out:

      "Most of the 'web page won't open' issues seen so far come from failing to obtain the SSL certificate, which prevents nginx from loading."

    • The OpenUBMC web UI uses HTTPS; if the certificate is not generated or loaded correctly, API calls can fail and logins can be rejected.
  3. IAM (user authentication module) failure or missing dependency

    • security.log shows pam_unix opening a session successfully, yet "login failed" entries remain, suggesting PAM authentication passes but the subsequent IAM handling fails.
    • This may stem from the faulty get_product_info call in config.lua (related to Problem 2).
  4. Database or configuration file corruption

    • If manifest/output/data was modified abnormally, user configuration may be lost or inconsistent (you already deleted and rebuilt that directory, which points at the same suspicion).

:wrench: Suggested Fixes

1. Check the web backend services (nginx, IAM)

# Run after logging in via SSH
netstat -ntlpa | grep :10443       # is port 10443 listening?
ps aux | grep nginx                # is nginx running?
ps aux | grep iam                  # is the iam service up?

2. Check that the SSL certificate exists and is valid

ls /etc/ssl/certs/                 # check the certificate directory
ls /data/cert/                     # common OpenUBMC certificate path

If the certificate is missing or empty, nginx may fail to load its security module and reject web logins.
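
Beyond listing files, you can check what certificate the HTTPS endpoint actually presents. A sketch using standard openssl tooling, assuming the default host port mapping of 10443:

# Print the subject and validity window of the certificate served on 10443.
echo | openssl s_client -connect localhost:10443 2>/dev/null \
  | openssl x509 -noout -subject -dates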

3. Inspect the key log files to locate the root cause

# Security log
cat /var/log/security.log | grep "login failed"

# Application log (most likely place for the error)
cat /var/log/app.log | grep -i "login\|auth\|iam"

4. Try restarting the web interface service

killall interface    # restart the web front-end proxy process (common in OpenUBMC)
# wait a few seconds, then retry the web login

Tip: this is one of the recovery methods recommended in 《Qemu相关FAQ》. A readiness check is sketched below.
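
Instead of retrying on a fixed delay, you can poll until the web backend actually listens. A small sketch to run inside the BMC shell; port 30081 is the upstream for /UI/Rest/Login seen in the netstat output and nginx error log later in this thread:

# Wait until the 'interface' backend accepts connections before logging in.
until netstat -ntl | grep -q ':30081'; do
    echo "waiting for web backend..."
    sleep 5
done
echo "web backend is listening; try the web login now"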


:warning: Problem 2: ipmcset errors when configuring user privileges

:cross_mark: Error Log Analysis

/opt/bmc/skynet/lua: ./opt/bmc/apps/cli/lualib/config.lua:72: 
bad argument #2 to 'get_product_info' (number expected, got string)

The error occurs in the following call chain inside config.lua:

get_product_info → get_interface_config_path → require(base.lua) → ipmcset

That is, get_product_info() expects a numeric argument but received a string (such as "admin").

Root cause: when configuring user privileges, ipmcset resolves the interface configuration path from the product info, but passes an argument of the wrong type, crashing the Lua runtime.


:books: Background: how ipmcset works

  • ipmcset -d adduser -v admin: creates a user named admin :white_check_mark:
  • ipmcset -d privilege -v admin 4: sets user admin's privilege level to 4 (ADMINISTRATOR)

But the second command, when calling get_interface_config_path(), incorrectly passes the string "admin" as the product ID (which should be a number) to get_product_info().


:man_detective: Root Cause Hypotheses

  1. Firmware bug: the ipmcset CLI in OpenUBMC 25.12-LTS-SP1 is missing type validation or passes the wrong argument.
  2. Script defect: config.lua does not sanity-check its input and passes a string straight to a function that requires a number.
  3. Manifest build anomaly: a corrupted output/data directory may damage configuration templates and change command behavior.

Supporting evidence: KG data shows that openUBMC 25.09 changed the ipmitool user priv behavior to introduce stricter channel requirements, suggesting this privilege subsystem is prone to regressions.


:wrench: Suggested Fixes

:check_mark: Option 1: use mdbctl instead of ipmcset (recommended)

Avoid the buggy ipmcset and manage users through the lower-level mdbctl (if supported):

# List users (to verify)
mdbctl
> attach user_mgmt
> call getUserList {}

# Add a user manually (requires knowledge of the DBus interfaces)

Tip: the interface definitions generated by bmcgo gen -r openubmc_dev can help locate the relevant services.

:check_mark: Option 2: inspect and fix the config.lua logic (developer-level fix)

Locate the file:

./opt/bmc/apps/cli/lualib/config.lua

Find the code around line 72:

local function get_interface_config_path(product_id)
    local info = get_product_info(product_id)  -- product_id must be a number here
    ...
end

The problem is that product_id is never type-checked. Add validation before the call:

if type(product_id) ~= "number" then
    log.error("Invalid product_id type: " .. type(product_id))
    return nil
end

This fix would need to be submitted as a PR to the OpenUBMC community.

:check_mark: Option 3: roll back to a known-stable release or SP patch

  • You are on 25.12-LTS-SP1; try switching to a more stable 25.06 or 25.09 build.
  • Search the OpenUBMC forum for known bug reports against this version.

:white_check_mark: Recommended End-to-End Procedure

# 1. Stop the current QEMU instance
killall qemu-system-aarch64    # or kill by the actual process name

# 2. Clean output/data (verified to help)
rm -rf manifest/output/data

# 3. Rebuild the QEMU image
python3 build/works/packet/qemu_shells/vemake_1711.py

# 4. After startup, wait 1-2 minutes so nginx and iam are fully up

# 5. Log in via SSH and check the services
ssh Administrator@localhost -p 10022
netstat -ntlpa | grep :10443
ps aux | grep nginx
cat /var/log/app.log | grep -i error

# 6. Try the web login (do not log in immediately after boot)

# 7. If you need to create users, test in this order:
ipmcget -d userlist                            # list existing users
ipmcset -d adduser -v testuser                 # create a user (interactive password prompt)
# ❌ avoid ipmcset -d privilege; instead:
#    - grant privileges via the Web UI (if available)
#    - or call the underlying interface via mdbctl

:pushpin: Summary and Recommendations

| Problem | Root cause | Suggestion |
| --- | --- | --- |
| Web login failure | slow nginx startup / SSL certificate issue / IAM service error | check app.log and security.log, wait for services to start, run killall interface if needed |
| ipmcset privilege crash | type error in config.lua (string passed where a number is expected) | avoid ipmcset for privileges for now; use the Web UI or mdbctl instead |
| Unstable environment | corrupted output/data polluting the configuration | always clean this directory before building |

:light_bulb: Report to the community

Since this involves a serious CLI bug (a type error that crashes the tool), consider filing an issue on the OpenUBMC forum:

Title: [BUG] ipmcset -d privilege triggers a Lua type error: get_product_info receives string but expects number

Description: In OpenUBMC 25.12-LTS-SP1, setting user privileges with ipmcset fails because config.lua does not validate argument types, so get_product_info errors out and breaks user management. Suggest adding a type check or a default conversion.


:white_check_mark: Path to Resolution

  1. Clean the environment → 2. Rebuild the image → 3. Wait for services to come up → 4. Create the user and assign privileges via the Web UI (avoiding the CLI bug) → 5. Web login works.

If the problem persists, share the latest /var/log/app.log snippet for further analysis.

/var/log/app.log output at the time of the web login failure:

~ # cat /var/log/app.log | grep -i "login|auth|iam"
1970-01-01 00:01:42.942657 iam NOTICE: micro_component.lua(167): Startup status has changed, ==> Starting, uptime:103s, cost 0ms
1970-01-01 00:01:47.706367 iam NOTICE: persist_client_lib.lua(60): persist client init completed, time taken: 3960 ms
1970-01-01 00:01:53.334308 iam WARNING: client_app_base.lua(33): ping bmc.kepler.persistence /bmc/kepler/persistence failed 1 time, err: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include:the remote application did not send a reply, the messagebus security policy blocked the reply, the reply timeout expired,or the network connection was broken., retrying…
1970-01-01 00:01:58.837954 iam WARNING: client_app_base.lua(33): ping bmc.kepler.persistence /bmc/kepler/persistence failed 2 times, err: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include:the remote application did not send a reply, the messagebus security policy blocked the reply, the reply timeout expired,or the network connection was broken., retrying…
1970-01-01 00:02:04.345171 iam WARNING: client_app_base.lua(33): ping bmc.kepler.persistence /bmc/kepler/persistence failed 3 times, err: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include:the remote application did not send a reply, the messagebus security policy blocked the reply, the reply timeout expired,or the network connection was broken., retrying…
1970-01-01 00:02:15.160812 iam NOTICE: persist_client_lib.lua(60): persist client init completed, time taken: 26850 ms
1970-01-01 00:02:15.179387 iam NOTICE: iam_app.lua(143): iam app init start
1970-01-01 00:02:15.239092 iam NOTICE: service_app_base.lua(368): start iam service
1970-01-01 00:02:18.747835 iam NOTICE: mc_admin.lua(86): no service json info
1970-01-01 00:02:29.369217 iam ERROR: C: [WSEC_CBB][208] (UTC) 1970-01-01 00:02:29 KMC is running, not initialize repeatedly before finalized.
1970-01-01 00:02:32.659350 iam NOTICE: key_client_lib.lua(188): start listen on change of keyid from path: /bmc/kepler/KeyService/Kmc/Keys/10
1970-01-01 00:02:32.662682 iam NOTICE: key_client_lib.lua(107): Start Refresh MK Mask.
1970-01-01 00:02:32.754779 iam NOTICE: iam_app.lua(297): iam class init start
1970-01-01 00:02:37.246561 iam NOTICE: iam_app.lua(303): iam class init completed!
1970-01-01 00:02:37.341267 iam NOTICE: iam_app.lua(204): iam app init completed!
1970-01-01 00:02:37.362385 iam NOTICE: micro_component.lua(167): Startup status has changed, Starting ==> InitCompleted, uptime:158s, cost 54440ms
1970-01-01 00:02:47.004635 fructrl NOTICE: pwr_mutation.lua(79): [System:1]map event type = FruInsertionCriteriaMet
1970-01-01 00:02:47.023193 fructrl NOTICE: hotswap_state.lua(89): [System:1]Current event is FruInsertionCriteriaMet, state is M1, uptime: 168s.
1970-01-01 00:02:55.583628 iam NOTICE: object_manage.lua(678): start to fetch hwdiscovery objects
1970-01-01 00:02:56.192029 iam NOTICE: object_manage.lua(667): delay start, delay: 70ms
1970-01-01 00:02:56.202584 iam NOTICE: object_manage.lua(716): fetch hwdiscovery objects completely, took 0 ms, uptime: 177 s
1970-01-01 00:03:00.775690 iam ERROR: app_preloader.lua(232): …alib/account/interface/mdb/account_service_cache_mdb.lua:57: app(iam/service/main) count(1) pcall failed(./opt/bmc/libmc/lualib/mc/mdb/init.lua:848: try get object failed)
1970-01-01 00:03:04.972415 iam WARNING: init.lua(1129): Requestor Skynet message queue scheduling delay is 3645 ms (threshold: 500 ms), service_name=:1.80, path=/bmc/kepler/MdbService, interface=bmc.kepler.Mdb, method_name=GetObject
1970-01-01 00:03:17.252008 iam WARNING: init.lua(1129): Requestor Skynet message queue scheduling delay is 1276 ms (threshold: 500 ms), service_name=:1.80, path=/bmc/kepler/MdbService, interface=bmc.kepler.Mdb, method_name=GetObject
1970-01-01 00:03:18.939384 iam ERROR: app_preloader.lua(232): …alib/account/interface/mdb/account_service_cache_mdb.lua:57: app(iam/service/main) count(2) pcall failed(./opt/bmc/libmc/lualib/mc/mdb/init.lua:848: try get object failed)
1970-01-01 00:03:24.454179 account NOTICE: snmp_patch.lua(23): Because the authentication protocol of User 2 does not match then authentication key,the authentication protocol has been corrected.
1970-01-01 00:03:38.834463 iam ERROR: app_preloader.lua(232): …alib/account/interface/mdb/account_service_cache_mdb.lua:57: app(iam/service/main) count(3) pcall failed(./opt/bmc/libmc/lualib/mc/mdb/init.lua:848: try get object failed)
1970-01-01 00:03:43.274688 iam WARNING: init.lua(1129): Requestor Skynet message queue scheduling delay is 1542 ms (threshold: 500 ms), service_name=:1.80, path=/bmc/kepler/MdbService, interface=bmc.kepler.Mdb, method_name=GetObject
1970-01-01 00:03:51.180322 account NOTICE: account_app.lua(225): infrastructure init end, login rule init start
1970-01-01 00:03:52.312743 account NOTICE: account_app.lua(231): login rule init end, role privilege init start
1970-01-01 00:03:52.323177 iam WARNING: init.lua(1129): Requestor Skynet message queue scheduling delay is 765 ms (threshold: 500 ms), service_name=:1.80, path=/bmc/kepler/MdbService, interface=bmc.kepler.Mdb, method_name=GetObject
1970-01-01 00:04:02.218848 iam WARNING: init.lua(1129): Requestor Skynet message queue scheduling delay is 6386 ms (threshold: 500 ms), service_name=:1.80, path=/bmc/kepler/MdbService, interface=bmc.kepler.Mdb, method_name=GetObject
1970-01-01 00:04:06.968826 iam ERROR: app_preloader.lua(232): …alib/account/interface/mdb/account_service_cache_mdb.lua:57: app(iam/service/main) count(4) pcall failed(./opt/bmc/libmc/lualib/mc/mdb/init.lua:848: try get object failed)
1970-01-01 00:04:10.052302 nsm NOTICE: https_inter_chassis.lua(174): [nginx] Inter chassis auth enabled property changed, INTER_CHASSIS service enabled: false
1970-01-01 00:04:16.226987 iam WARNING: init.lua(1129): Requestor Skynet message queue scheduling delay is 3809 ms (threshold: 500 ms), service_name=:1.80, path=/bmc/kepler/MdbService, interface=bmc.kepler.Mdb, method_name=GetObject
1970-01-01 00:04:33.343034 iam NOTICE: account_cache_mdb.lua(79): receive add account signal, account23 added
1970-01-01 00:04:36.465104 iam NOTICE: account_cache_mdb.lua(79): receive add account signal, account20 added
1970-01-01 00:04:37.048010 iam NOTICE: account_cache_mdb.lua(79): receive add account signal, account19 added
1970-01-01 00:04:37.639591 iam NOTICE: account_cache_mdb.lua(79): receive add account signal, account2 added
1970-01-01 00:04:38.014052 iam NOTICE: account_cache_mdb.lua(79): receive add account signal, account18 added
2023-08-15 09:20:46.321820 iam NOTICE: account_cache_mdb.lua(79): receive add account signal, account22 added
2023-08-15 09:22:40.380872 iam NOTICE: key_client_lib.lua(181): Domain 10 update key_id to 2
2023-08-15 09:22:50.066665 iam NOTICE: start_profiling.lua(199): profiling finished, start time:1970-01-01 00:01:42, duration:5 min, sent signals:72, received signals:174, sent rpcs:43, received rpcs:1
2023-08-15 09:22:57.666604 iam ERROR: session_service.lua(86): get multi host manager_id(1) object failed
2023-08-15 09:22:57.671089 iam ERROR: session_service.lua(95): The retrieved object is null
2023-08-15 09:22:57.678824 iam ERROR: init.lua(97): session_mdb.lua:36 > session_mdb.lua:52 > session_service.lua:96: The request failed due to an internal service error. The service is still operational.
2023-08-15 09:22:57.693442 iam ERROR: session_service.lua(284): update cli online session error InternalError
2023-08-15 09:21:00.151883 web_backend NOTICE: route_mapper.lua(388): uri:/UI/Rest/Login, method:get timeout, t1=1692091256856, t2=1692091260137, time=3281
2023-08-15 09:21:05.485840 iam ERROR: session_service.lua(86): get multi host manager_id(1) object failed
2023-08-15 09:21:05.489068 iam ERROR: session_service.lua(95): The retrieved object is null
2023-08-15 09:21:05.495007 iam ERROR: init.lua(97): session_mdb.lua:36 > session_mdb.lua:52 > session_service.lua:96: The request failed due to an internal service error. The service is still operational.
2023-08-15 09:21:05.498627 iam ERROR: operation_logger.lua(620): NewSession: InternalError InternalError: The request failed due to an internal service error. The service is still operational.
2023-08-15 09:21:05.672047 web_backend NOTICE: route_mapper.lua(388): uri:/UI/Rest/Login, method:post timeout, t1=1692091258882, t2=1692091265662, time=6780

Try deleting the output and temp directories first to clear the cache, then relaunch QEMU; once QEMU is up, check whether nginx started correctly (a quick check is sketched below).
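
A quick way to check that from the BMC shell once QEMU is up (ports taken from the netstat output below):

# Verify nginx is running and listening on its HTTP/HTTPS ports.
ps aux | grep [n]ginx
netstat -ntl | grep -E ':(443|8000) '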

I deleted both of those directories, but afterwards QEMU would not start at all, so I deleted the manifest directory and reinitialized. Launching QEMU again gives the same result, and nginx does start normally:

~ # netstat -ntlpa
netstat: showing only processes with your user ID
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:8208 0.0.0.0:* LISTEN 4401/nginx
tcp 0 0 127.0.0.1:80 0.0.0.0:* LISTEN 4401/nginx
tcp 0 0 127.0.0.1:8208 0.0.0.0:* LISTEN 4401/nginx
tcp 0 0 127.0.0.1:80 0.0.0.0:* LISTEN 4401/nginx
tcp 0 0 127.0.0.1:8210 0.0.0.0:* LISTEN 2531/om
tcp 0 0 127.0.0.1:40020 0.0.0.0:* LISTEN 2354/bmc_core
tcp 0 0 127.0.0.1:46100 0.0.0.0:* LISTEN 2531/om
tcp 0 0 127.0.0.1:40021 0.0.0.0:* LISTEN 2356/security
tcp 0 0 127.0.0.1:2198 0.0.0.0:* LISTEN 4401/nginx
tcp 0 0 127.0.0.1:2198 0.0.0.0:* LISTEN 4401/nginx
tcp 0 0 127.0.0.1:22 0.0.0.0:* LISTEN 3878/sshd_config [l
tcp 0 0 127.0.0.1:40023 0.0.0.0:* LISTEN 2350/alarm
tcp 0 0 127.0.0.1:2199 0.0.0.0:* LISTEN 4401/nginx
tcp 0 0 127.0.0.1:2199 0.0.0.0:* LISTEN 4401/nginx
tcp 0 0 127.0.0.1:40443 0.0.0.0:* LISTEN 4401/nginx
tcp 0 0 127.0.0.1:443 0.0.0.0:* LISTEN 4401/nginx
tcp 0 0 127.0.0.1:40443 0.0.0.0:* LISTEN 4401/nginx
tcp 0 0 127.0.0.1:443 0.0.0.0:* LISTEN 4401/nginx
tcp 0 0 127.0.0.1:40030 0.0.0.0:* LISTEN 2446/hardware
tcp 0 0 127.0.0.1:40031 0.0.0.0:* LISTEN 2480/om_priv
tcp 0 0 127.0.0.1:30080 0.0.0.0:* LISTEN 4556/interface
tcp 0 0 127.0.0.1:40032 0.0.0.0:* LISTEN 2418/energy
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN 4401/nginx
tcp 0 0 127.0.0.1:40033 0.0.0.0:* LISTEN 2363/ras
tcp 0 0 127.0.0.1:30081 0.0.0.0:* LISTEN 4556/interface
tcp 0 0 127.0.0.1:40035 0.0.0.0:* LISTEN 2531/om
tcp 0 0 127.0.0.1:40010 0.0.0.0:* LISTEN 2159/framework
tcp 0 0 127.0.0.1:22 10.0.2.2:39702 ESTABLISHED 4531/sshd: Administ
tcp 0 0 127.0.0.1:35138 127.0.0.1:23 ESTABLISHED 5611/busybox_x
tcp 0 0 ::1:8208 :::* LISTEN 4401/nginx
tcp 0 0 ::1:8208 :::* LISTEN 4401/nginx
tcp 0 0 ::2:22 :::* LISTEN 3878/sshd_config [l
tcp 0 0 ::1:22 :::* LISTEN 3878/sshd_config [l
tcp 0 0 ::1:2198 :::* LISTEN 4401/nginx
tcp 0 0 ::1:2198 :::* LISTEN 4401/nginx
tcp 0 0 :::23 :::* LISTEN 2084/busybox_x
tcp 0 0 ::1:2199 :::* LISTEN 4401/nginx
tcp 0 0 ::1:2199 :::* LISTEN 4401/nginx
tcp 0 0 ::ffff:127.0.0.1:23 ::ffff:127.0.0.1:35138 ESTABLISHED 2084/busybox_x

$ curl -I http://localhost
HTTP/1.1 200 OK
Server: nginx/1.24.0 (Ubuntu)
Date: Mon, 02 Mar 2026 02:43:08 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Sat, 28 Feb 2026 01:39:13 GMT
Connection: keep-alive
ETag: "69a24741-267"
Accept-Ranges: bytes

When logging in from the web, it returns 500 Internal Server Error:

HTTP/1.1 500 Internal Server Error
Server: WebServer/1.0.0
Date: Tue, 15 Aug 2023 09:21:05 GMT
Content-Type: application/json;charset=utf-8
Content-Length: 136
Connection: keep-alive
Set-Cookie: userdata: NULL
Token: userdata: NULL
OData-Version: 4.0
X-Frame-Options: DENY
X-Download-Options: noopen
X-XSS-Protection: 1;mode=block
X-Content-Type-Options: nosniff
X-Permitted-Cross-Domain-Policies: none
Strict-Transport-Security: max-age=31536000; includeSubDomains
Content-Security-Policy: default-src 'self'; script-src 'self' 'unsafe-eval'; connect-src 'self' wss://:; img-src 'self' data:; frame-src 'self'; font-src 'self' data:; object-src 'self'; style-src 'self'; form-action 'self'; frame-ancestors 'self'; plugin-types 'none'
Cache-Control: max-age=0, no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: 0
Referrer-Policy: no-referrer

Check whether the port mapping is in place; if curl can reach it, the web UI should in principle work too. See the sketch below for inspecting the mapping.
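
If in doubt, you can inspect the running QEMU command line for its hostfwd rules (standard QEMU user-mode networking syntax; the actual ports in your launch script may differ, 10443/10022 are the host ports used elsewhere in this thread):

# Show the port-forward rules the running QEMU was started with,
# e.g. hostfwd=tcp::10443-:443 maps host port 10443 to guest port 443.
ps aux | grep [q]emu-system | tr ',' '\n' | grep hostfwd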

The port mapping should be fine; the GET requests issued during web login do succeed.

However, during web login, tail -f /opt/bmc/web/nginx/logs/error.log produces some errors; I'm not sure whether they matter (a follow-up check is sketched after the log):

2023/08/15 09:20:50 [error] 4604#0: *2 open() "/opt/bmc/web/htdocs/extern/custom/style_config.json" failed (2: No such file or directory), client: 10.0.2.2, server: _, request: "GET /extern/custom/style_config.json?t=1772423287314 HTTP/1.1", host: "localhost:10443"
2023/08/15 09:20:50 [alert] 4604#0: *2 pread() read only 4096 of 19211 from "/opt/bmc/web/htdocs/favicon.ico" while sending response to client, client: 10.0.2.2, server: _, request: "GET /favicon.ico HTTP/1.1", host: "localhost:10443"
2023/08/15 09:20:50 [alert] 4603#0: *1 pread() read only 4096 of 19211 from "/opt/bmc/web/htdocs/favicon.ico" while sending response to client, client: 10.0.2.2, server: _, request: "GET /favicon.ico HTTP/1.1", host: "localhost:10443"
2023/08/15 09:20:50 [alert] 4603#0: *6 pread() read only 4096 of 19211 from "/opt/bmc/web/htdocs/favicon.ico" while sending response to client, client: 10.0.2.2, server: _, request: "GET /favicon.ico HTTP/1.1", host: "localhost:10443"
2023/08/15 09:20:50 [alert] 4603#0: *4 pread() read only 4096 of 19211 from "/opt/bmc/web/htdocs/favicon.ico" while sending response to client, client: 10.0.2.2, server: _, request: "GET /favicon.ico HTTP/1.1", host: "localhost:10443"
2023/08/15 09:20:50 [alert] 4603#0: *5 pread() read only 4096 of 19211 from "/opt/bmc/web/htdocs/favicon.ico" while sending response to client, client: 10.0.2.2, server: _, request: "GET /favicon.ico HTTP/1.1", host: "localhost:10443"
2023/08/15 09:20:51 [alert] 4604#0: *8 pread() read only 4096 of 19211 from "/opt/bmc/web/htdocs/favicon.ico" while sending response to client, client: 10.0.2.2, server: _, request: "GET /favicon.ico HTTP/1.1", host: "localhost:10443"
2023/08/15 09:20:51 [alert] 4604#0: *9 pread() read only 4096 of 19211 from "/opt/bmc/web/htdocs/favicon.ico" while sending response to client, client: 10.0.2.2, server: _, request: "GET /favicon.ico HTTP/1.1", host: "localhost:10443"
2023/08/15 09:20:51 [alert] 4604#0: *10 pread() read only 4096 of 19211 from "/opt/bmc/web/htdocs/favicon.ico" while sending response to client, client: 10.0.2.2, server: _, request: "GET /favicon.ico HTTP/1.1", host: "localhost:10443"
2023/08/15 09:20:51 [alert] 4604#0: *11 pread() read only 4096 of 19211 from "/opt/bmc/web/htdocs/favicon.ico" while sending response to client, client: 10.0.2.2, server: _, request: "GET /favicon.ico HTTP/1.1", host: "localhost:10443"
2023/08/15 09:20:51 [alert] 4603#0: *12 pread() read only 4096 of 19211 from "/opt/bmc/web/htdocs/favicon.ico" while sending response to client, client: 10.0.2.2, server: _, request: "GET /favicon.ico HTTP/1.1", host: "localhost:10443"
2023/08/15 09:20:52 [error] 4603#0: *3 open() "/opt/bmc/web/htdocs/extern/custom/style_config.json" failed (2: No such file or directory), client: 10.0.2.2, server: _, request: "GET /extern/custom/style_config.json?t=1772423289094 HTTP/1.1", host: "localhost:10443"
2023/08/15 09:20:52 [error] 4603#0: *14 open() "/opt/bmc/web/htdocs/extern/custom/login.png" failed (2: No such file or directory), client: 10.0.2.2, server: _, request: "GET /extern/custom/login.png HTTP/1.1", host: "localhost:10443"
2023/08/15 09:20:52 [error] 4604#0: *16 open() "/opt/bmc/web/htdocs/extern/custom/login_logo.png" failed (2: No such file or directory), client: 10.0.2.2, server: _, request: "GET /extern/custom/login_logo.png HTTP/1.1", host: "localhost:10443"
2023/08/15 09:20:51 [error] 4604#0: *21 connect() failed (111: Connection refused) while connecting to upstream, client: 10.0.2.2, server: _, request: "POST /UI/Rest/Login HTTP/1.1", upstream: "http://127.0.0.1:30081/UI/Rest/Login", host: "localhost:10443"
2023/08/15 09:21:04 [info] 4603#0: *27 client 10.0.2.2 closed keepalive connection
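
The last [error] entry may be the relevant one: nginx got "connect() failed (111: Connection refused)" from its upstream http://127.0.0.1:30081 exactly on POST /UI/Rest/Login, which matches the 500 seen above. A quick check for that backend (run inside the BMC shell):

# Is the 'interface' process (the upstream for /UI/Rest/Login) up and listening?
ps aux | grep [i]nterface
netstat -ntlp | grep ':3008'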

I reinstalled the environment and switched the SDK version, and the problem is fixed; web login now works.

I ran into this problem too.

I pulled the latest manifest code; using the latest (25.12) SDK resolves it: https://repo.openubmc.cn/latest/sdk/bmc_sdk.zip

OK, I'll try that. I was also on the latest manifest and tried both the main branch and the LTS branch; my SDK was downloaded from https://repo.openubmc.cn/25.12/sdk/.