SAPHanaSR_upgrade_to_angi(7) | SAPHanaSR | SAPHanaSR_upgrade_to_angi(7) |
NAME¶
SAPHanaSR_upgrade_to_angi - How to upgrade from SAPHanaSR or SAPHanaSR-ScaleOut to SAPHanaSR-angi.
DESCRIPTION¶
* What is the upgrade about?
SAPHanaSR-angi can be used to replace SAPHanaSR and SAPHanaSR-ScaleOut. SAPHanaSR-angi is quite similar to SAPHanaSR and SAPHanaSR-ScaleOut, but not fully backward compatible. Upgrading existing clusters is possible by following a defined procedure. The upgrade should lead to the same configuration as an installation from scratch.
The upgrade procedure depends on an initial setup as described in setup guides and manual pages. See REQUIREMENTS below and in manual pages SAPHanaSR(7) or SAPHanaSR-ScaleOut(7). The procedure does not necessarily need downtime for HANA, if planned and executed carefully. Nevertheless, it should be done under friendly conditions.
* What will be changed for SAP HANA scale-up scenarios?
SAPHanaSR-angi unifies HA for HANA scale-up and scale-out. Therefore it handles scale-up as a subset of scale-out, which changes the structure of attributes. The most significant changes are listed below.
a. The SAPHana RA and its multi-state config will be replaced by
the new SAPHanaController and its clone promotable config.
b. The SAPHanaSR.py HADR provider hook script will be replaced by the new
susHanaSR.py.
c. Tools are placed in /usr/bin/ instead of /usr/sbin/.
d. Node attributes will be removed.
hana_<sid>_remoteHost
lpa_<sid>_lpt
hana_<sid>_op_mode
hana_<sid>_srmode
hana_<sid>_sync_state
First and second field of hana_<sid>_roles
hana_<sid>_glob_prim
hana_<sid>_glob_sec
hana_<sid>_site_lpt_<site>
hana_<sid>_site_lss_<site>
hana_<sid>_site_mns_<site>
hana_<sid>_site_srr_<site>
hana_<sid>_site_opMode_<site>
hana_<sid>_site_srMode_<site>
hana_<sid>_site_srPoll_<site>
* What will be changed for SAP HANA scale-out scenarios?
SAPHanaSR-angi unifies HA for HANA scale-up and scale-out. The structure of attributes stays unchanged. The most significant changes are listed below.
a. The SAPHanaController RA and its multi-state config will be
replaced by the new SAPHanaController and its clone promotable config.
b. The SAPHanaSrMultiTarget.py HADR provider hook script will be replaced by
the new susHanaSR.py.
c. Tools are placed in /usr/bin/ instead of /usr/sbin/.
d. Node attributes will be removed.
hana_<sid>_gsh
hana_<sid>_glob_upd
hana_<sid>_glob_sync_state
hana_<sid>_glob_srmode
hana_<sid>_glob_srHook (in case of obsolete scale-out SAPHanaSR.py)
hana_<sid>_site_lpt_<site>
hana_<sid>_site_lss_<site>
hana_<sid>_site_mns_<site>
hana_<sid>_site_srr_<site>
hana_<sid>_site_srMode_<site>
hana_<sid>_site_srPoll_<site>
* What does the upgrade procedure look like at a glance?
The upgrade procedure consists of four phases: preparing, removing, adding, finalising. Linux cluster and HANA are kept running. However, resource management is disabled and the system goes through fragile states during the upgrade.
1.1 Check for sane state of cluster, HANA and system replication
1.2 Collect information needed for the upgrade
1.3 Make backup of CIB, sudoers and global.ini
2.1 Set SAPHana or SAPHanaController resource to maintenance
2.2 Remove SAPHanaSR.py or SAPHanaSrMultiTarget.py from global.ini, HANA and sudoers
2.3 Remove SAPHana or SAPHanaController resource config from CIB
2.4 Remove SAPHanaSR property attributes from CIB
2.5 Remove SAPHanaSR node attributes from CIB
2.6 Remove SAPHanaSR or SAPHanaSR-ScaleOut RPM
3.1 Install SAPHanaSR-angi RPM
3.2 Add susHanaSR.py to sudoers, global.ini, HANA
3.3 Add angi SAPHanaController resource config to CIB
3.4 Refresh SAPHanaController resource and set it out of maintenance
3.5 Add SAPHanaFilesystem resource (optional)
3.6 Add SAPHanaSR-alert-fencing agent (optional, scale-out)
4.1 Check for sane state of cluster, HANA and system replication
4.2 Test RA on secondary and trigger susHanaSR.py (optional)
4.3 Remove ad-hoc backup from local directories
* What needs to be prepared upfront?
First make yourself familiar with concepts, components and configuration of SAPHanaSR-angi. Refresh your knowledge of SAPHanaSR or SAPHanaSR-ScaleOut.
Next the following information needs to be collected and documented before upgrading a cluster:
1.2 Name of both cluster nodes, respectively both HANA master nameservers, see SAPHanaSR-showAttr(8)
1.3 HANA SID and instance number, name of <sid>adm
1.4 HANA virtual hostname, in case it is used
1.5 Name and config of existing SAPHana, or SAPHanaController, resources and related constraints in CIB, see ocf_suse_SAPHana(7) or ocf_suse_SAPHanaController(7)
1.6 Path to sudoers permission config file and its content, e.g. /etc/sudoers.d/SAPHanaSR
1.7 Name of existing SAPHanaSR.py, or SAPHanaSrMultiTarget.py, section in global.ini and its content, see SAPHanaSR.py(7), SAPHanaSrMultiTarget.py(7) and SAPHanaSR-manageProvider(8)
2.1 Name and config for new SAPHanaController resources and related constraints, path to config template, see ocf_suse_SAPHanaController(7)
2.2 Path to config template for new sudoers permission and its content, see susHanaSR.py(7)
2.3 Path to config template for new susHanaSR.py section, e.g. /usr/share/SAPHanaSR-angi/global.ini_susHanaSR, see susHanaSR.py(7)
2.4 Name and config for new SAPHanaFilesystem resources, path to config template , see ocf_suse_SAPHanaFilesystem(7) (optional)
Finally, prepare the config templates with the correct values for the given cluster. Ideally, the needed commands are also prepared in detail.
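The SID, instance number and <sid>adm user name needed above can, for example, be derived from a HANA instance profile name. A minimal sketch, assuming the usual <SID>_HDB<nr>_<host> naming; the sample pf= line is hypothetical, on a real node take it from /usr/sap/sapservices or the profile directory:

```shell
# Derive SID, instance number and <sid>adm from a HANA profile name.
# The sample pf= line below is hypothetical.
line='pf=/usr/sap/HA1/SYS/profile/HA1_HDB00_node1'
prof="${line##*/}"                      # HA1_HDB00_node1
SID="${prof%%_*}"                       # HA1
tmp="${prof#*_HDB}"                     # 00_node1
INR="${tmp%%_*}"                        # 00
sidadm="$(echo "$SID" | tr 'A-Z' 'a-z')adm"
echo "SID=$SID InstanceNumber=$INR sidadm=$sidadm"
```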
EXAMPLES¶
* Example for checking sane state of cluster, HANA and system replication.
These steps should be performed before doing anything with the cluster, and again after something has been done. Usually this is done once per Linux cluster. See also manual pages SAPHanaSR_maintenance_examples(7), cs_show_saphanasr_status(8) and section REQUIREMENTS below. For scale-out, SAPHanaSR-manageAttr(8) might be helpful as well.
# crm_mon -1r
# crm configure show | grep cli-
# SAPHanaSR-showAttr
# cs_clusterstate -i
* Example for showing SID and instance number of SAP HANA.
The installed SAP HANA instance is shown (there should be only one) with its SID and instance number. For systemd-enabled HANA the same information can be fetched from systemd. Needs to be done at least once per Linux cluster. See also manual page SAPHanaSR_basic_cluster(7).
# systemd-cgls -u SAP.slice
* Example for collecting information on SAPHana resource config.
The names for SAPHana primitive and multi-state resource are determined, as well as for related order and (co-)location constraints. The SAPHana primitive configuration is shown. Might be useful to see if there is anything special. Needs to be done once per Linux cluster.
# crm configure show |\
grep -E "(primitive|master|order|location).*SAPHana_"
# crm configure show rsc_SAPHana_HA1_HDB00
* Example for making a backup of CIB, sudo config and global.ini.
SID is HA1, sudo config is /etc/sudoers.d/SAPHanaSR. BAKDIR holds the name of an ad-hoc backup directory, chosen upfront.
# mkdir ~/$BAKDIR
# cp -a /hana/shared/HA1/global/hdb/custom/config/global.ini ~/$BAKDIR/
# cp -a /etc/sudoers.d/SAPHanaSR ~/$BAKDIR/SAPHanaSR.sudo
# crm configure show >~/$BAKDIR/crm_configure.txt
# ls -l ~/$BAKDIR/*
* Example for removing SAPHana resource config from CIB, scale-up.
First the CIB is written to file for backup. Next the cluster is told to not stop orphaned resources and the SAPHana multi-state resource is set into maintenance. Next the order and colocation constraints are removed, the SAPHana multi-state resource is removed and the orphaned primitive is refreshed. Then the cluster is told to stop orphaned resources again. Finally the resulting cluster state is shown. Of course also the CIB should be checked to see if the removal was successful. Needs to be done once per Linux cluster. SID is HA1, instance number is 00. The resource names have been determined as shown in the example above.
# echo "property cib-bootstrap-options: stop-orphan-resources=false"|\
crm configure load update -
# crm resource maintenance msl_SAPHana_HA1_HDB00 on
# cibadmin --delete --xpath \
"//rsc_order[@id='ord_SAPHana_HA1_HDB00']"
# cibadmin --delete --xpath \
"//rsc_colocation[@id='col_saphana_ip_HA1_HDB00']"
# cibadmin --delete --xpath \
"//master[@id='msl_SAPHana_HA1_HDB00']"
# crm resource refresh rsc_SAPHana_HA1_HDB00
# echo "property cib-bootstrap-options: stop-orphan-resources=true"|\
crm configure load update -
# crm_mon -1r
* Example for removing location constraints from CIB, scale-out.
First, the same steps as for scale-up have to be done, see example above. In addition the (anti-)location constraints for the majority maker node have to be removed. The resource names have been determined as shown in the example above.
# cibadmin --delete --xpath \
"//rsc_location[@id='SAPHanaCon_not_on_majority_maker']"
# cibadmin --delete --xpath \
"//rsc_location[@id='SAPHanaTop_not_on_majority_maker']"
* Example for removing all reboot-safe node attributes from CIB.
All reboot-safe node attributes will be removed. Needed attributes are expected to be re-added by the RAs later. Of course the CIB should be checked to see if the removal was successful. Needs to be done for both nodes, or both master nameservers. Node is node1. See also crm_attribute(8).
# crm configure show node1 | tr " " "\n" |\
awk -F "=" 'NR>5 {print $1}' | while read; do \
crm_attribute --node node1 --name $REPLY --delete; done
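To preview what the pipeline above would delete, it can be run without the crm_attribute part on captured output. A sketch on a hypothetical one-line rendering of the node section; real output and attribute names will differ:

```shell
# Hypothetical 'crm configure show node1' output, flattened to one line;
# NR>5 skips the five header tokens (node, id, name, backslash, attributes).
out=$(printf 'node 1084783228: node1 \\ attributes hana_ha1_vhost=node1 hana_ha1_site=WDF\n' |
  tr " " "\n" | awk -F"=" 'NR>5 {print $1}')
echo "$out"
```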
* Example for removing non-reboot-safe node attribute from CIB, scale-up.
The attribute hana_<sid>_sync_state will be removed. Of course the CIB should be checked to see if the removal was successful. Needs to be done for both nodes. Scale-up only. Node is node1, SID is HA1. See also crm_attribute(8).
# crm_attribute --node node1 --name hana_ha1_sync_state \
--lifetime reboot --query
# crm_attribute --node node1 --name hana_ha1_sync_state \
--lifetime reboot --delete
* Example for removing all SAPHanaSR property attributes from CIB, scale-out.
All attributes of the SAPHanaSR property will be removed. Needed attributes are expected to be re-added by the RAs later. The attribute for srHook will be added by the susHanaSR.py HADR provider script and might be missing until the HANA system replication status changes. Of course the CIB should be checked to see if the removal was successful. Needs to be done once per Linux cluster. Scale-out only. See also SAPHanaSR-showAttr(8) and SAPHanaSR.py(7) or SAPHanaSrMultiTarget.py(7) respectively.
# crm configure show SAPHanaSR |\
awk -F"=" '$1~/hana_/ {print $1}' | while read; do \
crm_attribute --delete --type crm_config --name $REPLY; done
* Example for removing the SAPHanaSR.py hook script from global.ini and HANA.
The global.ini is copied for backup. Next the exact name (upper/lower case) of the section is determined from global.ini. Then the current HADR provider section is shown. If the section is identical with the shipped template, it can be removed easily from the configuration. Finally the HADR provider hook script is removed from running HANA. Needs to be done for each HANA site. SID is HA1, case sensitive HADR provider name is SAPHanaSR. The example is given for scale-up SAPHanaSR.py, for scale-out SAPHanaSrMultiTarget.py might be removed instead. The path /usr/sbin/ is used, because this step is done while the old RPM is still installed. See manual page SAPHanaSR.py(7) or SAPHanaSrMultiTarget.py(7) for details on checking the hook script integration.
~> cdcoc
~> cp global.ini global.ini.SAPHanaSR-backup
~> grep -i ha_dr_provider_saphanasr global.ini
~> /usr/sbin/SAPHanaSR-manageProvider --sid=HA1 --show \
--provider=SAPHanaSR
~> /usr/sbin/SAPHanaSR-manageProvider --sid=HA1 --reconfigure \
--remove /usr/share/SAPHanaSR/samples/global.ini
~> hdbnsutil -reloadHADRProviders
* Example for removing the SAPHanaSR.py hook script from sudoers.
Needs to be done on each node. The example is given for scale-up SAPHanaSR.py, for scale-out SAPHanaSrMultiTarget.py might be removed instead. See manual page SAPHanaSR.py(7) for details on checking the hook script integration.
# grep -v "$sidadm.*ALL..NOPASSWD.*crm_attribute.*$sid" \
"$SUDOER".angi-bak >$SUDOER
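The effect of the grep filter can be previewed on sample data before overwriting the real file. A sketch with hypothetical sudoers lines; $sid, $sidadm and $SUDOER must of course be set to the real values on the node:

```shell
# Demo values; on a real node set sid, sidadm and SUDOER accordingly.
sid=ha1 sidadm=ha1adm
kept=$(printf '%s\n' \
  "$sidadm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_${sid}_site_srHook_* -v *" \
  'root ALL=(ALL) ALL' |
  grep -v "$sidadm.*ALL..NOPASSWD.*crm_attribute.*$sid")
echo "$kept"
```

Only the unrelated line survives the filter; the hook script permission line is dropped.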
* Example for removing the SAPHanaSR and SAPHanaSR-doc package.
The packages SAPHanaSR and SAPHanaSR-doc are removed from all cluster nodes. Related packages defined by patterns and dependencies are not touched. Needs to be done once per Linux cluster. Finally all nodes are checked to confirm the packages are no longer installed. The example is given for scale-up SAPHanaSR, for scale-out SAPHanaSR-ScaleOut might be removed instead.
# crm cluster run "rpm -e --nodeps SAPHanaSR SAPHanaSR-doc"
# crm cluster run "hostname; rpm -q SAPHanaSR-doc --queryformat %{NAME}"
# crm cluster run "hostname; rpm -q SAPHanaSR --queryformat %{NAME}"
* Example for installing the SAPHanaSR-angi package.
The package SAPHanaSR-angi is installed on all cluster nodes. All nodes are checked for the package. Needs to be done once per Linux cluster.
# crm cluster run \
"zypper --non-interactive in -l -f -y SAPHanaSR-angi"
# crm cluster run \
"hostname; rpm -q SAPHanaSR-angi --queryformat %{NAME}"
* Example for adding susHanaSR.py to sudoers.
Needs to be done on each node. See manual page susHanaSR.py(7) and SAPHanaSR-hookHelper(8).
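No template content is shown in this page; the lines below are only a hedged sketch of what the sudoers entries typically look like. SID HA1, the user ha1adm and the exact command arguments are assumptions; take the authoritative lines from the template referenced in susHanaSR.py(7). The sketch writes to a demo file, on a real node the target is /etc/sudoers.d/SAPHanaSR:

```shell
# Hedged sketch only -- verify against the template from susHanaSR.py(7).
# Written to a demo file here; the real target is /etc/sudoers.d/SAPHanaSR.
cat > ./SAPHanaSR.sudo.demo <<'EOF'
# SAPHanaSR-angi, susHanaSR.py (SID HA1 assumed)
ha1adm ALL=(ALL) NOPASSWD: /usr/bin/SAPHanaSR-hookHelper --sid=HA1 *
ha1adm ALL=(ALL) NOPASSWD: /usr/bin/crm_attribute -n hana_ha1_*
EOF
```

On a real node, check the syntax with visudo -c -f /etc/sudoers.d/SAPHanaSR before relying on it.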
* Example for adding susHanaSR.py to global.ini and HANA.
Needs to be done for each HANA site. See manual page susHanaSR.py(7) and SAPHanaSR-manageProvider(8).
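Mirroring the removal example above, the shipped template can be merged into global.ini and the providers reloaded. A hedged sketch, run as <sid>adm; SID HA1 and the template path from the FILES section below are assumptions for the given cluster:

```
~> /usr/bin/SAPHanaSR-manageProvider --sid=HA1 --reconfigure \
     --add /usr/share/SAPHanaSR-angi/samples/global.ini_susHanaSR
~> hdbnsutil -reloadHADRProviders
```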
* Example for adding angi SAPHanaController resource config to CIB.
Needs to be done once per Linux cluster. See manual page ocf_suse_SAPHanaController(7), SAPHanaSR_basic_cluster(7) and SUSE setup guides.
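As an illustration only, a promotable clone for the angi RA has roughly the following shape. Resource names, parameters and timeouts here are assumptions and must be taken from ocf_suse_SAPHanaController(7) and the matching SUSE setup guide, together with the needed order and colocation constraints:

```
# Hedged sketch of a crm configure fragment, not a verified template.
primitive rsc_SAPHanaCon_HA1_HDB00 ocf:suse:SAPHanaController \
  params SID=HA1 InstanceNumber=00 PREFER_SITE_TAKEOVER=true \
    DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=false \
  op start timeout=3600 op stop timeout=3600 \
  op promote timeout=900 op demote timeout=320 \
  op monitor interval=60 role=Promoted timeout=700 \
  op monitor interval=61 role=Unpromoted timeout=700
clone mst_SAPHanaCon_HA1_HDB00 rsc_SAPHanaCon_HA1_HDB00 \
  meta clone-node-max=1 promotable=true interleave=true
```

Once adapted, such a fragment would be loaded with crm configure load update <file>.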
* Example for setting SAPHanaController resource out of maintenance.
First the SAPHanaController multi-state resource is refreshed,
then it is set out of maintenance. Name of the resource is
mst_SAPHanaController_HA1_HDB00. Of course status of cluster, HANA and
system replication needs to be checked before and after this action, see
example above. Needs to be done once per Linux cluster. See also manual page
SAPHanaSR_maintenance_examples(7).
Note: The srHook status for the HANA secondary site might be empty.
# crm resource refresh mst_SAPHanaController_HA1_HDB00
# crm resource maintenance mst_SAPHanaController_HA1_HDB00 off
* Example for testing RA on secondary site and trigger susHanaSR.py.
This step is optional. The secondary node is determined from SAPHanaSR-showAttr. On that node, the hdbnameserver is killed. The cluster will recover the secondary HANA and set the CIB attribute srHook. Of course status of cluster, HANA and system replication needs to be checked.
# SECNOD=$(SAPHanaSR-showAttr --format=tester |\
awk -F"/" '$1=="0 Host"&&$3=="score=\"100\"" {print $2}')
# echo $SECNOD
# ssh root@$SECNOD "hostname; killall -9 hdbnameserver"
FILES¶
- /etc/sudoers.d/SAPHanaSR
- recommended place for sudo permissions of HADR provider hook scripts
- /usr/sbin/ , /usr/bin/
- path to tools before the upgrade, after the upgrade
- /hana/shared/$SID/global/hdb/custom/config/global.ini
- on-disk representation of HANA global system configuration
- /usr/share/SAPHanaSR/samples/global.ini
- template for classical scale-up SAPHanaSR.py entry in global.ini
- /usr/share/SAPHanaSR-ScaleOut/samples/global.ini
- template for classical scale-out SAPHanaSrMultiTarget.py entry in global.ini
- /usr/share/SAPHanaSR-angi/samples/global.ini_susHanaSR
- template for susHanaSR.py entry in global.ini
- /usr/share/SAPHanaSR-angi/samples/SAPHanaSR-upgrade-to-angi-demo
- unsupported script for demonstrating the procedure on a test cluster
REQUIREMENTS¶
* OS, Linux cluster and HANA are matching requirements for
SAPHanaSR, or SAPHanaSR-ScaleOut respectively, and SAPHanaSR-angi.
* The resource configuration matches a documented setup. Even if the general
upgrade procedure is expected to work for customised configuration, details
might need special treatment.
* The whole upgrade procedure is tested carefully and documented in detail
before being applied on production.
* Linux cluster, HANA and system replication are in sane state before the
upgrade. All cluster nodes are online.
* The HANA database is idle during the upgrade. No other changes on OS,
cluster, database or infrastructure are done in parallel to the upgrade.
* Linux cluster, HANA and system replication are checked and in sane state
before set back into production.
BUGS¶
In case of any problem, please use your favourite SAP support process to open a request for the component BC-OP-LNX-SUSE. Please report any other feedback and suggestions to feedback@suse.com.
SEE ALSO¶
SAPHanaSR-angi(7) , SAPHanaSR(7) ,
SAPHanaSR-ScaleOut(7) , ocf_suse_SAPHana(7) ,
ocf_suse_SAPHanaController(7) , SAPHanaSR.py(7) ,
SAPHanaSrMultiTarget.py(7) , susHanaSR.py(7) ,
SAPHanaSR-upgrade-to-angi-demo(8) ,
SAPHanaSR_maintenance_examples(7) , SAPHanaSR-showAttr(8) ,
crm(8) , crm_mon(8) , crm_attribute(8) ,
cibadmin(8) ,
https://documentation.suse.com/sbp/sap/ ,
https://www.suse.com/c/tag/towardszerodowntime/
AUTHORS¶
A.Briel, F.Herschel, L.Pinne.
COPYRIGHT¶
(c) 2024 SUSE LLC
These maintenance examples come with ABSOLUTELY NO WARRANTY.
For details see the GNU General Public License at
http://www.gnu.org/licenses/gpl.html
20 Jan 2025