Renewing SAS for Linux at the Command Line

sudo /usr/local/SAS/SASFoundation/9.3/sassetup

That gives you this:

Welcome to SAS Setup, the program used to renew your SAS software.

Some conventions used throughout SAS Setup are:
       *   indicates which menu selection is the default choice
     ( )   indicates the default response to a prompt
       !   starts a sub-shell on your system
       h   displays help for a menu selection or prompt
       g   goes back to a previous menu selection or prompt
       q   quits SAS Setup at any point

Setup Utilities Menu
--------------------
* 1.  Renew SAS Software
   -------------------------------
   g: Go back   q: Quit    h: Help
   -------------------------------
Action? (1)

If you continue, you can specify your SETINIT file:

SAS Installation Data (SID) is a text file required to install a customized
version of SAS. The SID was e-mailed to your SAS Installation Representative. If
you would like to receive the SID via e-mail now, please use URL to retrieve it
before continuing the installation.

   http://support.sas.com/adminservices-SID

Specify the file containing SAS Installation Data.
-->/path/to/SAS93_XXXXXX_99999999_LINUX_X86-64.txt

Press return, and hopefully you'll see:

SAS Installation Data retrieved successfully.

_______________________________________________________________________________

Applying SAS Installation Data
Please wait...

SAS Installation Data application is complete.
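To confirm the renewal took, you can run PROC SETINIT in batch and check the expiration dates in the log. A sketch, assuming the same install prefix as the sassetup path above; the /tmp file names are arbitrary:

```shell
# Sketch: confirm the renewed license by running PROC SETINIT in batch.
# The install path matches the sassetup invocation above.
cat > /tmp/setinit_check.sas <<'EOF'
proc setinit;
run;
EOF
/usr/local/SAS/SASFoundation/9.3/sas /tmp/setinit_check.sas -log /tmp/setinit_check.log

# The log lists each licensed product along with its new expiration date.
grep -i 'expir' /tmp/setinit_check.log
```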

Reference:

Ubuntu 64, SAS 9.2 & What I DO all Day
Usage Note 10838: When updating the SAS 9 license, "ERROR: 180-322: Statement is not valid or it is used out of proper order" error message appears

Solved: internal error boot orders have to be contiguous and starting from 1

I exported a VM from RHEV 3.0 to an export domain. Then I imported the VM into RHEV 3.1.

I was surprised that the VM would not start. Instead, I got this message:

Exit message: internal error boot orders have to be contiguous and starting from 1

Upon further inspection, the boot order information was completely missing from the imported VM's libvirt definition:

    <disk device="disk" snapshot="no" type="block">
      <source dev="/rhev/data-center/a1f867a3-237a-4be3-8e07-bf61f5b95280/71051d89-b9bd-4db5-8fe1-7c4499114874/images/d543a66c-4dd6-4d49-a8ea-7d65c137ce41/39bbb768-13d0-4bb8-ae93-a2c8611da655"/>
      <target bus="virtio" dev="vda"/>
      <serial>d543a66c-4ef6-4d49-a8ea-7d65c137ce41</serial>
      <driver cache="none" error_policy="stop" io="native" name="qemu" type="raw"/>
    </disk>

I solved this by editing the imported VM's disk in the RHEV UI: I changed the disk from bootable to nonbootable and saved the change, then changed it back to bootable again. This wrote the boot order back into the definition:

    <disk device="disk" snapshot="no" type="block">
      <source dev="/rhev/data-center/a1f867a3-237a-4be3-8e07-bf61f5b95280/71051d89-b9bd-4db5-8fe1-7c4499114874/images/d543a66c-4dd6-4d49-a8ea-7d65c137ce41/39bbb768-13d0-4bb8-ae93-a2c8611da655"/>
      <target bus="virtio" dev="vda"/>
      <serial>d543a66c-4ef6-4d49-a8ea-7d65c137ce41</serial>
      <boot order="1"/>
      <driver cache="none" error_policy="stop" io="native" name="qemu" type="raw"/>
    </disk>

I think this happened because the VM had a CD mounted in the RHEV 3.0 data center that was unavailable in the RHEV 3.1 data center.
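If you'd rather verify from the hypervisor than from the UI, the live domain XML can be inspected with virsh once the VM is running. A sketch, where "myvm" is a placeholder for your VM's name:

```shell
# Read-only dump of the live libvirt definition on the RHEV hypervisor;
# "myvm" is a placeholder, not a name from this environment.
virsh -r dumpxml myvm | grep -B1 -A1 '<boot '
# A healthy bootable disk shows a <boot order="1"/> element, as above.
```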

See also: https://bugzilla.redhat.com/show_bug.cgi?id=874952

Using a Proxy to Access EPEL from an Internal Network

I had some RHEL6 boxes on an internal network that had no access to the internet. But I wanted to install packages from EPEL via yum. The answer was to set up a proxy server and tell these internal boxes to use the proxy. Approach:

  1. Set up Squid proxy on a server that has access to the internet
  2. Configure Squid to only accept requests from my network
  3. Configure Squid to require a username and password, even on my network
  4. Install EPEL repository settings on the client
  5. Tell client to use the proxy

Set Up Squid

I'm using RHEL 6, so installing Squid is just yum install squid, and chkconfig squid on ensures it starts when the box is booted.

Lock Down Squid

My paranoia level is high, so I commented out all the example rules and only added my network, 198.51.100.0/24:

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
#acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
#acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
#acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
#acl localnet src fc00::/7       # RFC 4193 local private network range
#acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines
acl localnet src 198.51.100.0/24 # My internal network

For good measure, I commented out all of the allowed ports except 80 and 443, too.

Require a Username and Password

Even though it's on my local network, I wanted the proxy to require authentication. I'm not very concerned about encryption here so I used HTTP Basic authentication, which means I had to tell Squid to use the plugin that supports it. I added the following to the top of /etc/squid/squid.conf:

# Tell Squid to use ncsa_auth
auth_param basic program /usr/lib64/squid/ncsa_auth /etc/squid/squidcredentials
auth_param basic realm Squid
acl authenticated_acl proxy_auth REQUIRED

I also changed the line

http_access allow localnet

to

http_access allow localnet authenticated_acl

This tells Squid that clients on the 198.51.100.0/24 network must authenticate before they can use the proxy.

Then I created the file at /etc/squid/squidcredentials. This file holds the username and password:

htpasswd -c /etc/squid/squidcredentials foo
New password: mysecretpassword
Re-type new password: mysecretpassword
Adding password for user foo
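You can check the credentials file directly against the ncsa_auth helper before involving any clients; the helper reads "username password" pairs on stdin and answers OK or ERR. A sketch using the user created above:

```shell
# Feed ncsa_auth a "user password" pair on stdin; it prints OK on a match
# and ERR on a mismatch.
echo "foo mysecretpassword" | /usr/lib64/squid/ncsa_auth /etc/squid/squidcredentials
```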

A hole needs to be poked in the firewall to allow hosts on the internal network to reach squid on port 3128:

iptables -I INPUT 4 -p tcp -s 198.51.100.0/24 --dport 3128 -m state --state NEW -j ACCEPT
service iptables save

Squid needs to read the changes in /etc/squid/squid.conf, and an easy way to do that is to restart squid:

service squid restart
Stopping squid: ..................................................
Starting squid:                                            [  OK  ]

Install EPEL Repository Settings on the Client

This part is easy. Get the EPEL repository rpm and move it onto the client. Then install it with rpm:

rpm -Uvh epel-release-6-8.noarch.rpm

Tell Client to Use the Proxy

The epel-release installation placed a file at /etc/yum.repos.d/epel.repo. Edit this file and add the following line to the end of the [epel] section:

proxy=http://username:password@proxy.example.com:3128/

where proxy.example.com is the hostname or IP address of the proxy server set up above.
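Before running yum, it's worth confirming that the proxy path works at all with curl from the client. A sketch, using the same placeholder credentials and proxy host as above:

```shell
# Fetch the EPEL metalink through the proxy and print only the HTTP status.
# A 200 means the proxy accepted the credentials; a 407 means they're wrong.
curl -x http://username:password@proxy.example.com:3128/ \
  'https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=x86_64' \
  -o /dev/null -s -w '%{http_code}\n'
```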

If everything went well, you can now use yum update on the client and it will happily find the EPEL repository:

# yum update
epel/metalink                                           |  14 kB     00:00    
epel                                                    | 4.3 kB     00:00    
epel/primary_db                                         | 5.0 MB     00:34    
Setting up Update Process
No Packages marked for Update

If there is a typo in the password on the client, instead of the above you'll see something like

Could not get metalink https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=x86_64 error was
14: PYCURL ERROR 22 - "The requested URL returned error: 407"

References:

Install Squid Proxy Server on CentOS / Redhat enterprise Linux 5
Enabling basic authentication in Squid

Solved: iSCSI disconnects and timeouts after successful login

Consider the following from /var/log/messages:

iscsid: Connection16:0 to [target: iqn.2012-12.com.example:fooportal, portal: 198.51.100.3,3260] through [iface: default] is operational now
kernel: sde:
kernel: connection16:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4379432620, last ping 4379437620, now 4379442620
kernel: connection16:0: detected conn error (1011)
iscsid: Kernel reported iSCSI connection 16:0 error (1011 - ISCSI_ERR_CONN_FAILED: iSCSI connection failed) state (3)
iscsid: connection16:0 is operational after recovery (1 attempts)
kernel: connection16:0: ping timeout of 5 secs expired, recv timeout 5, last rx 4379445875, last ping 4379450875, now 4379455875
kernel: connection16:0: detected conn error (1011)
iscsid: Kernel reported iSCSI connection 16:0 error (1011 - ISCSI_ERR_CONN_FAILED: iSCSI connection failed) state (3)

As you can see, login to the iSCSI target was successful. But shortly thereafter the connection fails, recovers, and fails again, over and over.
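While the flapping is happening, iscsiadm can show session state from the client side (nothing assumed beyond a standard open-iscsi install):

```shell
# List active iSCSI sessions with connection state; a flapping session
# cycles between LOGGED_IN and failed/recovery states.
iscsiadm -m session -P 1
```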

In my case, the problem ended up being jumbo frames. I diagnosed it with a Wireshark capture on the client's bonded interface, which revealed messages like:

scsi transfer limited due to allocation_length too small

along with repeated retransmissions:

[TCP Retransmission] SCSI: Read(10) LUN: 0x01 (LBA: 0x00000000, Len: 8)

Turning off jumbo frames in the bonded interfaces on both ends of the connection solved the problem.
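Turning jumbo frames off just means dropping the MTU back to 1500 on the bonded interface at both ends. A sketch, assuming the bond is named bond0:

```shell
# Check the current MTU on the bond (jumbo frames typically mean 9000):
ip link show bond0 | grep -o 'mtu [0-9]*'

# Drop back to the standard Ethernet MTU. To make this persistent on
# RHEL 6, also set MTU=1500 in /etc/sysconfig/network-scripts/ifcfg-bond0.
ip link set dev bond0 mtu 1500
```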

This was happening because the switch I was going over (HP ProCurve 2910al) does not have jumbo frames enabled for the default VLAN:

# show vlan

Status and Counters - VLAN Information

  Maximum VLANs to support : 256                 

  VLAN ID Name                             | Status     Voice Jumbo
  ------- -------------------------------- + ---------- ----- -----
  1       DEFAULT_VLAN                     | Port-based No    No  

The ultimate solution was to create a separate VLAN on the switch and enable jumbo frames on the new VLAN. After that, everything worked swimmingly.
