Wednesday, October 11, 2017

Linux: How to display PCI devices topology


[user@hostname ~]# lspci -tv
-+-[0007:80]-+-00.0  Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet PCIe
 |           +-00.1  Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet PCIe
 |           +-00.2  Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet PCIe
 |           \-00.3  Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet PCIe
 +-[0005:50]---00.0  Texas Instruments TUSB73x0 SuperSpeed USB 3.0 xHCI Host Controller
 +-[0004:01]---00.0  Mellanox Technologies MT27600 [Connect-IB]
 +-[0003:70]---00.0  IBM PCI-E IPR SAS Adapter (ASIC)
 +-[0000:01]---00.0  Mellanox Technologies MT27520 Family [ConnectX-3 Pro]
 \-[0000:00]-
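
In the tree above, -t prints the topology and -v adds vendor and device names. The bracketed values are PCI domain and bus numbers, and each leaf is shown as device.function. To inspect a single device from the tree, pass its full domain:bus:device.function address to -s (the address below is the Connect-IB adapter taken from the output above):

lspci -s 0004:01:00.0 -v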


Thursday, November 24, 2016

Brocade SAN incoming SSH authentication not working - ssh login without password: wrong permissions on authorized_keys file

I faced an issue in FOS v7.4.1d where incoming SSH authentication (ssh login without a password) did not work, even though I had configured it according to the Brocade manual:
http://www.brocade.com/content/html/en/administration-guide/fos-741-adminguide/GUID-6FE6380F-52C8-4C5F-A69E-23EE7DB57E65.html

The root cause of the problem was the permissions on the authorized_keys file on the Brocade SAN switch:

Log in as the root user:
ssh root@SANSW

Change directory to: /fabos/users/admin/.ssh
SANSW:root> cd /fabos/users/admin/.ssh

and list permissions:
SANSW:root> ls -la
total 32
drwxr-xr-x   2 root     admin        4096 Nov 13 17:27 ./
drwxr-xr-x   3 root     admin        4096 Sep  8 17:30 ../
-rw-r--r--   1 root     admin       10240 Nov 13 17:27 authorizedKeys.tar
-rw-------   1 root     admin         392 Nov 13 17:27 authorized_keys
-rw-------   1 root     admin         392 Nov 13 17:27 authorized_keys.admin
-rw-r--r--   1 root     admin         134 Jul 15 01:09 environment

Change the permissions of the authorized_keys.admin file so the admin user can read it:
SANSW:root> chmod g+r,a+r authorized_keys.admin
SANSW:root> ls -la
total 32
drwxr-xr-x   2 root     admin        4096 Nov 13 17:27 ./
drwxr-xr-x   3 root     admin        4096 Sep  8 17:30 ../
-rw-r--r--   1 root     admin       10240 Nov 13 17:27 authorizedKeys.tar
-rw-------   1 root     admin         392 Nov 13 17:27 authorized_keys
-rw-r--r--   1 root     admin         392 Nov 13 17:27 authorized_keys.admin
-rw-r--r--   1 root     admin         134 Jul 15 01:09 environment
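
After fixing the permissions, passwordless login should work. To verify from the client side, you can force a key-only login attempt (a quick check using standard OpenSSH options; BatchMode disables the password fallback, so a failure is reported immediately instead of prompting):

ssh -o BatchMode=yes -o PreferredAuthentications=publickey admin@SANSW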




Thursday, November 19, 2015

AIX: How to synchronize multiple PPs/LPs in parallel with LVM NUM_PARALLEL_LPS

The syncvg command can be told how many LPs to synchronize in parallel via the -P flag (note the uppercase -P; lowercase -p means the name argument is a physical volume):

syncvg -P <#>

If you use mirrorvg, you can set the NUM_PARALLEL_LPS environment variable before running the command:

export NUM_PARALLEL_LPS=<#>

The value <#> can be a number in the range 1-32.
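
For example, to resynchronize a mirrored volume group with 8 LPs in parallel (datavg and hdisk1 are hypothetical names used only for illustration):

syncvg -P 8 -v datavg

export NUM_PARALLEL_LPS=8
mirrorvg datavg hdisk1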

Monday, July 20, 2015

AIX: devscan reporting and diagnostic tool for SAN devices

Are you looking for a reporting and diagnostic tool for SAN devices on AIX?

Try the IBM support tool called devscan, which can report the following for each LUN:

  • ODM name and status
  • PVID, if there is one
  • Device type
  • Capacity and block size
  • SCSI status
  • Reservation status, both SCSI-2 and SCSI-3
  • ALUA status
  • Time to service a SCSI Read

Source and download:
https://www-304.ibm.com/support/docview.wss?uid=aixtoolsc9e095f

Wednesday, April 1, 2015

AIX: How to get CPU / processor socket and core count for IBM Power server

Here is an AIX command that shows the number of processor cards (sockets) installed in your Power server, including the number of cores per socket. Note that in the IBM Power and AIX world, the term "processor" refers to a core at the hardware level.

Command:
lscfg -vp|grep WAY

Example:
lscfg -vp|grep WAY
      6-WAY  PROC CUOD:
      6-WAY  PROC CUOD: 


The example shows that the system has two sockets, each with 6 cores, so the system has 12 cores in total.
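
As a cross-check, you can compare with the processor count the operating system sees (in an LPAR this reflects the processors assigned to the partition, not necessarily the whole physical machine):

prtconf | grep -i "Number Of Processors"
lsdev -Cc processor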


Friday, February 20, 2015

AIX: How to determine / identify volume UID from Storwize / SVC

How to list / identify a volume UID from Storwize / SVC?



This command can be used to determine / list the volume UID of a disk located on Storwize / SVC from AIX:

for i in `lsdev -t 2145 | awk '{ print $1}'`   # all disks of type 2145 (SVC/Storwize)
do
echo $i "\c"                                   # print the hdisk name without a newline (ksh)
lsattr -El $i -a unique_id | awk '{ print $2 }' | cut -c 6-37   # extract the 32-character vdisk UID
done


or for native MPIO disks without SDDPCM and the Host Attachment Script (HAS):

for i in `lsdev -t mpioosdisk | awk '{ print $1}'`   # all generic MPIO disks
do
echo $i "\c"
lsattr -El $i -a unique_id | awk '{ print $2 }' | cut -c 6-37
done

You get the volume UID, which can be used to identify the disk on Storwize / SVC in the GUI or by running the lsvdisk command:

lsvdisk -filtervalue vdisk_UID=60050111111111111000000000000002


Enjoy IT!

Thursday, June 12, 2014

PowerVM: network outage during fallback in SEA load sharing environment


Problem summary

  • In a dual-VIOS setup where the SEA is configured in load sharing
    mode and the backup SEA is created over an EtherChannel in
    IEEE 802.3ad mode, rebooting the backup VIOS results in VIO
    client communication loss for about 30 seconds.

Friday, March 28, 2014

TSM: Why are backups going to tape instead of disk

Technote (troubleshooting)

Problem(Abstract)

Tivoli Storage Manager (TSM) client backups and/or archives may go to a tape pool even though the management class used sends data to a disk pool.

Cause

The DISK pool is not available, does not have enough free space, or the data size exceeds the pool's maximum size threshold (MAXSIZE).
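
You can check these conditions from the administrative command line (DISKPOOL is an example pool name; look at Pct Util, Maximum Size Threshold and Next Storage Pool in the output):

query stgpool DISKPOOL format=detailed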

Resolving the problem

Wednesday, October 9, 2013

AIX: How to list disk size using getconf

Disk sizes in AIX can be listed with this command:

for i in `lsdev -Cc disk | awk '{print $1}'`   # all disks known to the ODM
do
echo "$i \c"                                   # print the disk name without a newline (ksh)
getconf DISK_SIZE /dev/$i                      # size reported in MB
done
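
For a single disk, bootinfo -s reports the same size in MB (hdisk0 is used here only as an example):

bootinfo -s hdisk0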

Tuesday, September 10, 2013

TSM: Basic - Extended Edition: drive and storage slots limit


For IBM Tivoli Storage Manager version 5.4 and later, to use a library with more than 4 drives or more than 48 storage slots, IBM Tivoli Storage Manager Extended Edition is required.

Friday, September 6, 2013

AIX: migratepv 0516-404 allocp: This system cannot fulfill the allocation request

I faced an issue with the migratepv command.
I wanted to migrate PPs from hdisk3 to hdisk2 in the same scalable volume group.
The command failed even though there were enough free PPs on hdisk2.


migratepv hdisk3 hdisk2
0516-404 allocp: This system cannot fulfill the allocation request.
        There are not enough free partitions or not enough physical volumes
        to keep strictness and satisfy allocation requests.  The command
        should be retried with different allocation characteristics.
0516-132 lmigratelv: Incorrect entry in partition map file.
0516-134 lmigratelv: Received: 00f73e7044ccd502 110

0516-136 lmigratelv: Error reading input map.
0516-812 migratepv: Warning, migratepv did not completely succeed;
        all physical partitions have not been moved off the PV.

I had to run reorgvg to fix this issue.
It seems to be a bug in LVM, because I can't see any reason for the described behavior.
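
The sequence that worked looked like this (datavg is a placeholder for the affected volume group; lsvg -p just confirms the free PPs before retrying):

lsvg -p datavg
reorgvg datavg
migratepv hdisk3 hdisk2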