Monday, February 14, 2011

Rolling Back a Root Pool Snapshot From a Failsafe Boot

This procedure assumes that existing root pool snapshots are available. In this example, the root pool snapshots are available on the local system:

# zfs snapshot -r rpool@0730
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 6.87G 60.1G 37K /rpool
rpool@0730 18K - 37K -
rpool/ROOT 3.87G 60.1G 18K legacy
rpool/ROOT@0730 0 - 18K -
rpool/ROOT/zfs1008BE 3.87G 60.1G 3.82G /
rpool/ROOT/zfs1008BE@0730 52.3M - 3.81G -
rpool/dump 1.00G 60.1G 1.00G -
rpool/dump@0730 16K - 1.00G -
rpool/export 52K 60.1G 19K /export
rpool/export@0730 15K - 19K -
rpool/export/home 18K 60.1G 18K /export/home
rpool/export/home@0730 0 - 18K -
rpool/swap 2.00G 62.1G 16K -
rpool/swap@0730 0 - 16K -

1. Shut down the system and boot in failsafe mode.

ok boot -F failsafe
Multiple OS instances were found. To check and mount one of them
read-write under /a, select it from the following list. To not mount
any, select 'q'.

1 /dev/dsk/c0t1d0s0 Solaris 10 xx SPARC
2 rpool:5907401335443048350 ROOT/zfs1008

Please select a device to be mounted (q for none) [?,??,q]: 2
mounting rpool on /a

2. Roll back the individual root pool snapshots.

# zfs rollback rpool@0730
# zfs rollback rpool/ROOT@0730
# zfs rollback rpool/ROOT/zfs1008BE@0730
# zfs rollback rpool/export@0730
... (repeat for the remaining datasets captured by the recursive snapshot)

Note that current ZFS rollback behavior does not roll back a recursive set of snapshots with a -r option. You must roll back each of the individual snapshots created by the recursive snapshot.

3. Reboot back to multiuser mode.
# init 6

Solaris Flash archive

A Solaris Flash archive can be used to back up a complete system or to install new machines from the archive (cloning a system). With a Flash installation, you create a single reference installation of the Solaris OS on one machine, which can then be replicated as a new installation on any number of systems, called clones. This document shows how to create a Flash archive image.

After you install the system with all the packages and data you need, you create the Flash archive. All the files on the system are copied to the archive along with various pieces of identifying information. The flarcreate command is used to create the archive; it requires the -n name option and a file name for the archive.

Example: the -n option specifies a name for the archive and the -c option enables compression:

# flarcreate -n mobile_server -c /export/home/mobilebackup.flar

Determining which filesystems will be included in the archive...
Determining the size of the archive...
The archive will be approximately 3.12GB.
Creating the archive...
8713899 blocks
Archive creation complete.

ZFS : Basic administration guide

ZFS Pool:

ZFS organizes physical devices into logical pools called storage pools. Both individual disks and array logical unit numbers (LUNs) that are visible to the operating system can be included in a ZFS pool. A pool can be created from disks striped together with no redundancy (RAID 0), mirrored disks (RAID 1), striped mirror sets (RAID 1+0), or disks striped with parity (RAID-Z). Additional disks can be added to a pool at any time, but they must be added with the same RAID level.
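For example, a mirrored pool can be created and later grown by adding another mirrored pair; the pool name datapool and the disk names below are hypothetical:

# zpool create datapool mirror c2t1d0 c2t2d0     (RAID 1 pool)
# zpool add datapool mirror c2t3d0 c2t4d0        (add another mirrored pair at the same RAID level)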

ZFS Filesystem :

ZFS offers a POSIX-compliant file system interface to the Solaris/OpenSolaris operating system. A ZFS file system is built in one and only one storage pool, but a storage pool may contain more than one file system. Unlike UFS, ZFS file systems are not normally managed through the /etc/vfstab file; the common way to mount a ZFS file system is simply to define it within a pool. All defined ZFS file systems mount automatically at boot time unless configured otherwise.
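The mount point, for instance, is just a property of the file system; a brief sketch, using a hypothetical file system name:

# zfs set mountpoint=/data demovol/testing
# zfs get mountpoint demovol/testing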

Here are the basic commands for getting started with ZFS

Creating Storage pool using "zpool create" :

bash-3.00# zpool create demovol raidz c2t1d0 c2t2d0
bash-3.00# zpool status
pool: demovol
state: ONLINE
scrub: none requested
config:

NAME STATE READ WRITE CKSUM
demovol ONLINE 0 0 0
raidz1 ONLINE 0 0 0
c2t1d0 ONLINE 0 0 0
c2t2d0 ONLINE 0 0 0

errors: No known data errors
bash-3.00#

"zfs list" will give the details of the pool and other zfs filesytems.

bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
demovol 1.00G 900G 38.1K /demovol
bash-3.00#

Creating file systems: "zfs create" is used to create a ZFS file system.

bash-3.00# zfs create demovol/testing
bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
demovol 1.00G 900G 38.1K /demovol
demovol/testing 32.6K 900G 32.6K /demovol/testing
bash-3.00#

bash-3.00# ls /dev/zvol/dsk/demovol -- this shows the device files for any ZFS volumes (zvols) in the pool.

Setting a quota for the file system: until a quota is set, the file system shows the total available space of the containing ZFS pool.

bash-3.00# zfs set quota=10G demovol/testing
bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
demovol 1.00G 900G 39.9K /demovol
demovol/testing 32.6K 10.0G 32.6K /demovol/testing

Creating a snapshot :

bash-3.00# zfs snapshot demovol/testing@snap21
bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
demovol 1.00G 900G 39.9K /demovol
demovol/testing 32.6K 10.0G 32.6K /demovol/testing
demovol/testing@snap21 0 - 32.6K -
bash-3.00#

Get all properties of a ZFS file system:

bash-3.00# zfs get all demovol/testing
NAME PROPERTY VALUE SOURCE
demovol/testing type filesystem -
demovol/testing creation Mon Feb 9 9:05 2009 -
demovol/testing used 32.6K -
demovol/testing available 10.0G -
demovol/testing referenced 32.6K -
demovol/testing compressratio 1.00x -
demovol/testing mounted yes -
demovol/testing quota 10G local
demovol/testing reservation none default
demovol/testing recordsize 128K default
demovol/testing mountpoint /demovol/testing default
..


Cloning a ZFS filesystem from a snapshot :

bash-3.00# zfs clone demovol/testing@snap21 demovol/clone22
bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
demovol 1.00G 900G 39.9K /demovol
demovol/clone22 0 900G 32.6K /demovol/clone22
demovol/testing 32.6K 10.0G 32.6K /demovol/testing
demovol/testing@snap21 0 - 32.6K -
bash-3.00#
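A clone remains dependent on the snapshot it was created from. If the clone should become the independent copy, it can be promoted; a brief sketch:

bash-3.00# zfs promote demovol/clone22
bash-3.00# zfs list -t snapshot        (the snapshot now appears as demovol/clone22@snap21)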

Monitoring I/O performance of the ZFS storage pool:

bash-3.00# zpool iostat 1
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
demovol 4.95M 900G 0 0 0 35
demovol 4.95M 900G 0 0 0 0
demovol 4.95M 900G 0 0 0 0
demovol 4.95M 900G 0 0 0 0

Please refer to the zfs and zpool man pages for more detailed information.

Solving Mirrored {Root} Pool Problems (zpool attach)

If you cannot attach a disk to create a mirrored root or non-root pool with the zpool attach command, and you see messages similar to the following:

# zpool attach rpool c1t1d0s0 c1t0d0s0
cannot attach c1t0d0s0 to c1t1d0s0: new device must be a single disk

You might be running into CR 6852962, which has been seen in an LDoms environment. If the problem is not CR 6852962 and the system is booted under a virtualization product, make sure the devices are accessible to ZFS outside of the virtualization product.

Creating a Pool or Attaching a Disk to a Pool (I/O error)
If you attempt to create a pool, or attach a disk or a disk slice to an existing pool, and you see the following error:

# zpool attach rpool c4t0d0s0 c4t1d0s0
cannot open '/dev/dsk/c4t1d0s0': I/O error

This error means that the disk slice doesn't have any disk space allocated to it or, on an x86 system, that the Solaris fdisk partition or the slice doesn't exist. Use the format utility to allocate disk space to a slice. If the x86 system doesn't have a Solaris fdisk partition, use the fdisk utility to create one.
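Before retrying the attach, it can help to verify that the slice actually has space allocated; a brief sketch using the device names from the example above:

# prtvtoc /dev/rdsk/c4t1d0s2       (check whether slice 0 has any sectors allocated)
# format                           (select c4t1d0 and use the partition menu to size slice 0)
# zpool attach rpool c4t0d0s0 c4t1d0s0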


Panic/Reboot/Pool Import Problems
During the boot process, each pool must be opened, which means that pool failures might cause a system to enter into a panic-reboot loop. In order to recover from this situation, ZFS must be informed not to look for any pools on startup.

Boot From Milestone=None Recovery Method
Boot to the none milestone by using the -m milestone=none boot option:

ok boot -m milestone=none

Remount your root file system as writable.
Rename or move the /etc/zfs/zpool.cache file to another location.

These actions cause ZFS to forget that any pools exist on the system, preventing it from trying to access the bad pool causing the problem. If you have multiple pools on the system, do these additional steps:

* Determine which pool might have issues by using the fmdump -eV command to display the pools with reported fatal errors.
* Import the pools one-by-one, skipping the pools that are having issues, as described in the fmdump output.

If the system is back up, issue the svcadm milestone all command.
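A minimal sketch of the whole recovery sequence, assuming one bad pool (the names goodpool and badpool are hypothetical):

ok boot -m milestone=none
# mount -o remount,rw /                              (remount the root file system as writable)
# mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad
# fmdump -eV | grep -i pool                          (identify which pool reported fatal errors)
# zpool import goodpool                              (import the healthy pools one by one, skipping badpool)
# svcadm milestone all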

ZFS Mirrored Root Pool Disk Replacement

If a disk in a mirrored root pool fails, you can either replace the disk or attach a replacement disk and then detach the failed disk. The basic steps are as follows:

1. Identify the disk to be replaced by using the zpool status command.
2. Offline and unconfigure the failed disk, if necessary. You can do a live disk replacement if the system supports hot-plugging; on some systems, you might need to offline and unconfigure the failed disk first. For example:

# zpool offline rpool c1t0d0s0
# cfgadm -c unconfigure c1::dsk/c1t0d0

3. Physically replace the disk.
4. Reconfigure the disk. This step might not be necessary on some systems.

# cfgadm -c configure c1::dsk/c1t0d0

5. Confirm that the replacement disk has an SMI label and a slice 0 to match the existing root pool configuration.
6. Let ZFS know that the disk has been replaced.

# zpool replace rpool c1t0d0s0

7. Bring the disk online.

# zpool online rpool c1t0d0s0

8. Install the boot blocks after the disk is resilvered (see the example below).
9. Confirm that the replacement disk is bootable by booting the system from the replacement disk.
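A minimal sketch of step 8, assuming a SPARC system with a ZFS root pool (installgrub is used on x86 instead); the device name comes from the example above:

# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0     (SPARC)
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0                    (x86)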

Saturday, February 12, 2011

questions with answers

Solaris Questions:
How to view the kernel (shmmax) parameter value in Solaris 10? (sysdef -i)
What are the main differences between Solaris 10 and 9?
The main difference between Solaris 9 and Solaris 10 is SMF (the Service Management Facility). In Solaris 9, if a service goes down you may end up restarting all the services, which is a disadvantage. In Solaris 10, if a service goes down you can select and enable just that particular service instead of restarting all services.
============================================================
The automount facility contains three components:
The AutoFS file system
The automountd daemon
The automount command

The AutoFS map types:

Master map - The auto_master map associates a directory, also called a mount point, with a map.
Direct map - Lists the mount points as absolute path names. This map explicitly indicates the mount point on the client.
Indirect map - Lists the mount points as relative path names. This map uses a relative path to establish the mount point on the client.
Special map - Provides access to NFS servers by using their host names.
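A hypothetical pair of entries illustrating a master map and the indirect map it points to (the server name fileserver is made up):

# /etc/auto_master entry
/home    auto_home    -nobrowse

# /etc/auto_home (indirect map) entry
john    fileserver:/export/home/john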
===============================================================================

What is the command to do an interactive boot from the ok prompt?
boot -a performs an interactive boot from the ok prompt. (Stop+A is used to drop into the ok prompt from a running system.)

Which NFS daemons are found on the NFS server?
On the NFS server side there are five daemons:
nfsd
mountd
lockd
statd
nfslogd

These five daemons run on the NFS server; statd and lockd run on the NFS client too.

What file controls system-wide password aging?
/etc/default/passwd holds the system-wide aging defaults; the per-user aging values are stored in /etc/shadow.

What command will display the VTOC for disk c0t0d0s0?
prtvtoc /dev/rdsk/c0t0d0s0

Where are the templates stored that are copied into the user's home directories for their personal customizations?
/etc/skel

How do we know how many LAN cards we have in the server?

Give the command that will display your default boot device?
eeprom boot-device
prtconf -vp |grep -i boot


What command can you use to display all of your groups?
groups - displays the full list

What command can reconfigure devices without a reboot?
devfsadm

What are the different phases in the boot process?
The boot phases of the Solaris operating environment are:
1. Boot PROM
2. Boot programs (bootblk, ufsboot)
3. Kernel initialization (loading modules)
4. init phase

How many CPUs can we connect to a SPARC machine?
A Sun Fire 15K can have up to a maximum of 106 processors.

List the hidden files in the current directory?
ls -al |grep "^\."

How to find whether the OS instance is 32-bit or 64-bit?
isainfo -b

How to configure IP multipathing?

How would you find out what version of Solaris is currently running?
uname -r shows the release (version) of the OS, and uname -s shows the type of OS.

How do you find out drive statistics ? – iostat -E
Display Ethernet Address arp table ? – arp -a
Display the inter-process communication facility status ? – ipcs
Alternative for top command ? - prstat -a
Display the top most process utilizing most CPU ? - top

Given an ISO image, how do you mount it in Solaris?
Create a loopback device file with lofiadm:

acadie# /usr/sbin/lofiadm -a /path/to/image.iso
This will create, for example, /dev/lofi/1 . It can be mounted as follows:
acadie# mount -F hsfs -o ro /dev/lofi/1 /mnt/dir
------
lofiadm -a /export/temp/software.iso /dev/lofi/1
The lofi device creates a block device version of a file. This block device can be mounted to /mnt with the following command.
mount -F hsfs -o ro /dev/lofi/1 /mnt
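To undo the setup when finished, a brief sketch:

# umount /mnt                   (or /mnt/dir, whichever mount point was used)
# lofiadm -d /dev/lofi/1        (remove the loopback device)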
=========================================================================================================

ZFS
Which command(s) would you use to set up a ZFS file system?
zpool create (to create the storage pool) and zfs create (to create the file system)
FEATURES OF ZFS
How to create a ZFS file system
How to create a ZFS snapshot
Restore from a ZFS snapshot (see the sketch below)
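A compact sketch covering the topics above, using a hypothetical pool named datapool:

# zpool create datapool c1t1d0                 (create a storage pool)
# zfs create datapool/fs1                      (create a ZFS file system)
# zfs snapshot datapool/fs1@backup1            (create a snapshot)
# zfs rollback datapool/fs1@backup1            (restore the file system from the snapshot)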
===================================================================================

ZONES :
Explain Zone Features & types of Zones
Features of the Global Zone:
1. Solaris always boots (cold/warm) to the global zone.
2. Knows about all hardware devices attached to the system.
3. Knows about all non-global zones.
Features of Non-Global Zones:
1. Installed at a location on the file system of the global zone, the 'zone root path', e.g. /export/home/zones/zone1 {Zone2, Zone3, ...}; this acts as the root directory for the zone.
2. Share packages with the global zone.
3. Maintain distinct hostname and table files.
4. Cannot communicate with other non-global zones by default; a NIC must be used, which means using the standard network APIs (TCP).
5. The global zone administrator can delegate non-global zone administration.

Zone types:

1. Sparse root zones - share key files with the global zone.
2. Whole root zones - require more storage.


Zone Daemons and its functions
Where is zone configuration location?
In zones what is Whole root model & Sparse root model?
Creating a zone (sparse root) - see the sketch after the cloning example below
Cloning a Zone
=======================================================================
Zone cloning is available starting with the Solaris 10 11/06 release.
Assume that a zone called apple has been created and we are cloning it to a new zone called orange. To clone the apple zone, the apple zone must be in the installed state.
# zoneadm -z apple halt
# zonecfg -z apple export -f /etc/zones/apple.txt    (copies apple's configuration to a text file)
# vi /etc/zones/apple.txt    (edit the zonepath to a new path for the new zone named orange, give it a new IP address, and save the file)
# zonecfg -z orange -f /etc/zones/apple.txt
# zoneadm -z orange clone apple
# zoneadm -z orange boot
# zlogin -C orange
The same data and users are found in the orange zone as were created in the apple zone.
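For the "Creating a zone (sparse root)" topic listed above, a minimal sketch using a hypothetical zone named zone1, IP address, and interface name e1000g0:

# zonecfg -z zone1
zonecfg:zone1> create                              (the default create gives a sparse-root zone on Solaris 10)
zonecfg:zone1> set zonepath=/export/home/zones/zone1
zonecfg:zone1> add net
zonecfg:zone1:net> set address=192.168.0.10
zonecfg:zone1:net> set physical=e1000g0
zonecfg:zone1:net> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit
# zoneadm -z zone1 install
# zoneadm -z zone1 boot
# zlogin -C zone1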

===============================================================================================


NFS server configuration on solaris 10

The network file system (NFS)

NFS is a system that can be used to access file systems over the network. NFS version 4 is the default NFS version in Solaris 10. The NFS service is managed by the Service Management Facility, which means NFS can be managed (enabled, disabled, or restarted) with the svcadm command, and the status of the NFS service can be obtained with the svcs command. The benefit is sharing files over the network among computers that may be running different operating systems.

The NFS Service

The NFS service is a network service that enables computers of different architectures running different operating systems to share file systems across the network. A wide spectrum of operating systems ranging from Windows to Linux/UNIX support NFS. It has become possible to implement the NFS environment on a variety of operating systems because it is defined as an abstract model of a file system, rather than an architectural specification. Each operating system applies the NFS model to its specific file system semantics. This means that file system operations such as reading and writing work for the users as if they were accessing a file on the local system.

The benefits of the NFS service are described here:

1) It enables users on the network to share data, because all computers on the network can access the same set of files.

2) It reduces storage costs by letting computers share applications and common files instead of needing local disk space on each computer for each common file and user application.

3) It provides data consistency and reliability, because all users can read the same set of files, and whenever changes are made, they are made only at one place.

4) It makes the mounting of file systems accessing the remote files transparent to users.

5) It supports heterogeneous environments and reduces system administration overhead.

NFS is a network service offered in the client/server environment

NFS Servers and Clients

NFS is a client/server system; the terms client and server refer to the roles that computers assume in sharing resources (file systems in this case) on the network. In NFS, computers that make their file systems available over the network, and thereby serve the requested files, are acting as NFS servers, and the computers that access those file systems are acting as NFS clients. In the NFS framework, a computer on a network can assume the role of a client, a server, or both.

Here is how NFS works:

A server makes a file system on its disk available for sharing, and the file system can then be accessed by an NFS client on the network.

A client accesses files on the server's shared file system by mounting the file system.

The client does not make a copy of the file system on the server; instead, the mounting process uses a series of remote procedure calls that enable the client to access the file system transparently on the server's disk. To the user, the mounting works just like a mount on the local machine.

Once the remote file system (on the server) is mounted on the client machine, the user types commands as though the file systems were local.

You can mount an NFS file system automatically with AutoFS.

The NFS File Systems

In most UNIX system environments, a file hierarchy that can be shared by using the NFS service corresponds to a file system or a portion of a file system. However, a file system resides on a single operating system, and NFS support works across operating systems. Moreover, the concept of a file system might be meaningless in some non-UNIX environments. Therefore, the term file system in NFS refers to a file or a file hierarchy that can be shared and mounted in the NFS environment.

An NFS server can make a single file or a directory subtree (file hierarchy) available to the NFS service for sharing. A server cannot share a file hierarchy that overlaps with a file hierarchy that is already being shared. Note that peripheral devices such as modems and printers cannot be shared under NFS.

Managing NFS

Since the release of Solaris 9, the NFS server starts automatically when you boot the system. Nevertheless, you do need to manage NFS, which includes administering the NFS service, working with NFS daemons, and making file systems available for sharing.

Administering the NFS Service

When the system is booted, the NFS server is automatically started by executing the nfs.server scripts. However, when the system is up, you may need to stop the service or start it again for whatever reason without rebooting the system. For that, you need to know that the NFS service is managed by the Service Management Facility (SMF) under the identifier network/nfs/server. By means of this identifier, you can find the status of the service by using the svcs command, and you can start (enable) or stop (disable) the service by using the svcadm command.

You can determine whether the NFS service is running on your machine by issuing the command shown here:

# svcs network/nfs/server

This command displays whether the NFS service is online or disabled. If you want to stop (disable) the service, issue the following command:

# svcadm disable network/nfs/server

You can start the service by issuing the following command:

# svcadm enable network/nfs/server

When the system is up, some daemons are running to support the NFS service.

Working with NFS Daemons

Since the release of Solaris 9, NFS service starts automatically when the system is booted. When the system goes into level 3 (or multiuser mode), several NFS daemons are started to support the service.

Daemons automatically started in NFS version 4 when the system boots Daemon

Description

automountd - Handles mount and unmount requests from the autofs service.

nfsd - Handles file system requests from clients.

nfs4cbd - Manages the communication endpoints for the NFS version 4 callback program.

nfsmapid - Provides integer-to-string and string-to-integer conversions for the user ID (UID) and the group ID (GID).

The nfsd daemon handles the file system requests from the client and is automatically started with option -a. You can change the parameters of the command by editing the /etc/default/nfs file. The syntax for the nfsd command is as follows:

nfsd [-a] [-c {#_conn}] [-l {listenBacklog}] [-p {protocol}] [-t {device}]
[{nservers}]

The options and parameters are described here:

-a. Start the daemon over all available connectionless and connection-oriented transport protocols, such as TCP and UDP. This is equivalent to setting the NFSD_PROTOCOL parameter in the nfs file to ALL.

-c (#_conn). Set the maximum number of connections allowed to the NFS server over connection-oriented transport protocols such as TCP. By default, the number is unlimited. The equivalent parameter in the nfs file is NFSD_MAX_CONNECTIONS.

-l (listenBacklog). Set the connection queue length (specified by (listenBacklog)) for NFS over TCP. The default value is 32 entries. The equivalent parameter in the nfs file is NFSD_LISTEN_BACKLOG.

-p (protocol). Start the daemon over the protocol specified by (protocol). The default in NFS version 4 is TCP. The equivalent parameter in the nfs file is NFSD_PROTOCOL.

-t (device). Start an NFS daemon for the transport specified by (device). The equivalent parameter in the nfs file is NFSD_DEVICES.

(nservers). Set the maximum number of concurrent requests from clients that the NFS server can handle. The equivalent parameter in the nfs file is NFSD_SERVERS.
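For example, these parameters can be changed in /etc/default/nfs and the service restarted; a hedged sketch using the NFSD_SERVERS and NFSD_LISTEN_BACKLOG parameters described above (the values shown are arbitrary):

# vi /etc/default/nfs
NFSD_SERVERS=64
NFSD_LISTEN_BACKLOG=64
:wq
# svcadm restart network/nfs/server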


The default NFS version in Solaris 10 is version 4. Unlike previous versions of NFS, NFS version 4 does not use these daemons: lockd, mountd, nfslogd, and statd.

Sharing File Systems

On the server machine, you can make a file system available for sharing by using the share command. You can use this command manually for testing purposes or to make a file system available only until the system is rebooted. If you want to make the sharing of a file system permanent and automatic, enter the share command in the /etc/dfs/dfstab file. Each entry of this file is a share command, and the file is automatically executed at boot time when the system enters run level 3. The syntax of the share command is shown here:

share [-F (FSType)] [-o (specificOptions)] [-d (description)] [(pathname)]

The options are described here:

-F (FSType). Specifies the file system type, such as nfs.

-o (specificOptions). The (specificOptions) specifies the options for controlling access to the shared file system. The possible values for (specificOptions) are as follows:

rw. Read/write permissions for all clients. This is the default behavior.

rw=(client1):(client2):... Read/write permission for the listed clients; no access for any other client.

ro. Read-only permission for all clients.

ro=(client1):(client2):... Read-only permission for the listed clients; no access for any other client.

-d (description). The (description) specifies the description for the shared resource.

If you want to know the resources being shared on the local server, issue the dfshares command without any arguments or options.


Files related to the NFS service

/etc/default/autofs - Configuration information for autofs.

/etc/default/fs - Lists the default file system type for local file systems.

/etc/default/nfs - Configuration information for the nfsd daemon.

/etc/dfs/dfstab - Contains a list of local resources to be shared; the share commands.

/etc/mnttab - Lists file systems that are currently mounted.

/etc/dfs/sharetab - Lists the local and remote resources that are shared.

/etc/vfstab - Defines file systems to be mounted locally.

Some examples of how to share file systems with NFS:

# vi /etc/dfs/dfstab

share -F nfs -o ro,anon=0 /cdrom/sol_10_305_sparc/s0/Solaris_10/Tools - to share the cdrom OS software and read only permission.

share -F nfs -o rw,anon=0 /cdrom - to share files with read and write permission and anon=0 means access to all hosts.

share -F nfs -o rw=hostname1 /cdrom - to give access to only one host.

share -F nfs -o rw=-hostname1 /cdrom - to deny this hostname1 and access to all.

share -F nfs -o rw=hostname1:hostname2 /cdrom - access for hostname1 and hostname2

share -F nfs -o rw=-hostname1:-hostname2 /cdrom - deny hostname1 and hostname2 and allow access to all other computers in the network.

wq!

# shareall (or)
#exportfs -va - to export the filesystem

#share - to see the files shared in nfs and which are exported currently

Client side: mount the shared file system

# mount -F nfs hostname1:/cdrom /cdrom - mount the shared directory onto a local directory.

# cd /cdrom
# ls
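To make the mount permanent across reboots, an entry can be added to /etc/vfstab on the client; a hypothetical example (the fields are device to mount, device to fsck, mount point, FS type, fsck pass, mount at boot, and mount options):

# vi /etc/vfstab
hostname1:/cdrom   -   /cdrom   nfs   -   yes   ro,soft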

Solaris interview questions: RAID levels and file systems

Q) How to create a RAID 0 concatenation

A) # metainit d0 2 1 c0t0d0s1 1 c0t1d0s1

Q)How to see the meta device information

A)# metastat

Q) How to format and mount a slice

A) # newfs /dev/md/rdsk/d0
# mount /dev/md/dsk/d0 /nav

Q) How to create a RAID 0 stripe

A) # metainit d1 1 2 c0t0d0s1 c0t0d0s2
# metastat
# newfs /dev/md/rdsk/d1
# mount /dev/md/dsk/d1 /naveen

Q) How to differentiate concatenation and striping

A) When you use the metastat command, a stripe shows an interlace value (32 KB by default), which indicates that it is striped.

Q) How to clear metadevices

A) # metaclear d0
# metaclear d1

Q) How to create a RAID 1 mirror
# metainit d1 1 1 c0t0d0s1
# metainit d2 1 1 c0t0d0s2
# metainit d3 -m d1
# metattach d3 d2
# metastat

Q) How to create a stripe with parity (RAID 5)
# metainit d1 -r c0t0d0s0 c0t1d0s0 c0t2d0s0
# metastat
# newfs /dev/md/rdsk/d1
# mount /dev/md/dsk/d1 /naveen

Q) How to grow the size of the volume
# metattach d1 c0t0d0s1    (attach the new device to the metadevice)
# growfs -M /naveen /dev/md/rdsk/d1

Q) How to create RAID volumes using Veritas Volume Manager

Concatenation:
A) # vxassist -g rootdg make vol01 20g
# newfs /dev/vx/rdsk/rootdg/vol01
# mount /dev/vx/dsk/rootdg/vol01 /naveen

Striping:
A) # vxassist -g rootdg make vol02 20g layout=stripe st_width=32
# newfs /dev/vx/rdsk/rootdg/vol02
# mount /dev/vx/dsk/rootdg/vol02 /naveen

Mirroring:

A) # vxassist -g rootdg make vol03 20g layout=mirror
# newfs /dev/vx/rdsk/rootdg/vol03
# mount /dev/vx/dsk/rootdg/vol03 /naveen

Striping with parity:

A) # vxassist -g rootdg make vol4 20g layout=raid5,nolog
# newfs /dev/vx/rdsk/rootdg/vol4
# mount /dev/vx/dsk/rootdg/vol4 /naveen

Q) How to print plexes, subdisks, and volumes

# vxprint -pt - for plexes
# vxprint -st - for subdisks
# vxprint -vt - for volumes

Q) How to increase the size of the volume

# vxassist -g rootdg growby vol01 20g
# /usr/lib/fs/fsck -F ufs -M /naveen /dev/vx/rdsk/rootdg/vol01 40980    (the offset value comes from the vxprint -vt output)

Q) What is the top command used for?

A) It lists all the processes with their process IDs, along with:
CPU utilization and idle CPU
Memory utilization and free memory
Swap utilization and free swap
The applications using the most CPU, with their PIDs

Q) What is lsof used for?

A) lsof (list open files) is used to check which files a process has open when troubleshooting a problem with a file or process.

Q) What is the use of the truss command?

A) It traces the system calls made by a process.

Q) What are the fields in vfstab?

A) device to mount, device to fsck (raw device), mount point, file system type, fsck pass, mount at boot, and mount options. The file defines the file systems to be mounted at boot; the currently mounted file systems are listed in /etc/mnttab.
==============================================
How to create 2 state database replicas
# metadb -a -f -c 2 c0t0d0s1

Q) How to see the state database replicas.
A) # metadb

Q) What are RAID 0, RAID 1, and RAID 5?
A) RAID 0 is concatenation or striping.
Concatenation means writing data to the disks one after another.
Striping means writing data across the disks in chunks of the interlace size (32 KB by default).

RAID 1 - mirroring, which means writing the data to two disks in parallel, i.e. duplicating the data on two disks.

RAID 5 - striping with parity; the data and parity information are distributed across the disks, so the array survives a single disk failure.
======================================
How to configure newly attached hardware such as a hard disk or network card
# devfsadm or drvconfig

How to set environment variables in NVRAM (at the ok prompt)
ok nvalias /pci---/rarp - to set the device alias for booting a client over the network
ok nvalias net dhcp - to boot from DHCP
ok nvunalias net - remove the alias

How to see the physical disks connected to the system from the OpenBoot PROM (OBP)
ok probe-scsi

How do you set a default boot device from the boot PROM?
ok setenv boot-device disk
boot-device = disk
ok reset-all - to make the change take effect

How to see the default boot device from the boot PROM (OBP)
ok printenv boot-device

How to see which kernel architecture the system uses (32-bit or 64-bit)
# isainfo -kv :- shows whether a 32-bit or 64-bit kernel is running and the architecture

How to bring processor 3 online when you have more than 4 processors and processor 3 is offline?
# psrinfo -v :- to see the status of all processors
# psradm -n 3 :- to bring processor 3 online

How to break (recover) the root password

ok boot cdrom -s

# TERM=ansi
# export TERM

# mkdir /demo
# mount /dev/dsk/c0t0d0s0 /demo
# vi /demo/etc/shadow

root:KHGHGHGGFG:... - remove the junk (encrypted) password between the first and second colons

:wq!

# reboot - the system then lets root log in without asking for a password

# passwd - enter a new password.

NIS Interview questions

How to create a NIS master

# domainname sun.com
# echo sun.com > /etc/defaultdomain
# vi /etc/hosts
192.168.0.1 sun1
192.168.0.2 sun2     (add all the hosts connected to the network)
:wq!

# cp /etc/nsswitch.nis /etc/nsswitch.conf
# vi /etc/nsswitch.conf
files nis - add this entry
:wq!

# cd /var/yp
# ypinit -m - initialize the master server
Enter host 192.168.0.1
Stop at errors: say no here
yes-yes-yes

# cd /var/yp
# /usr/lib/netsvc/yp/ypstop
# /usr/lib/netsvc/yp/ypstart

Here you can see all the daemons get restarted.

Q) Which command displays the default NIS server?
A) ypwhich

Q) Which command will display all the master and slave servers?
A) ypcat -k ypservers

Q) How to see the NIS users
A) ypcat passwd

Q) What are the daemons for the NIS master?
A) ypserv, ypbind, ypxfrd, rpc.yppasswdd, rpc.ypupdated

Q) What is a map?
A) A map is a table consisting of keys and the values associated with them; it holds the NIS information for a given file.

Q) How to create a NIS slave server (192.168.0.2)

# vi /etc/hosts
192.168.0.1 sun1     (master server IP)
:wq!

# cp /etc/nsswitch.nis /etc/nsswitch.conf
# vi /etc/nsswitch.conf
files nis
:wq!

# domainname sun.com
# echo sun.com > /etc/defaultdomain     (or, for more security, list the servers in /var/yp/ypservers)

# cd /var/yp
# ypinit -s sun1     (sun1 is the master server)
yes: give the master server IP or hostname
at errors say no
yes-yes

# cd /var/yp     (if you are already in that directory, that is okay)
# /usr/lib/netsvc/yp/ypstop
# /usr/lib/netsvc/yp/ypstart
(this starts ypserv and ypbind)

# ypwhich - displays the server the client is bound to
# ypcat -k ypservers - lists the master and slave servers



Q) What are the daemons for a NIS slave server?
A) ypserv, ypbind

Q) How to see NIS users from the slave?
A) ypcat passwd

Q) How to configure a NIS client

# vi /etc/hosts
192.168.0.1 sun1 - master
192.168.0.2 sun2 - slave
:wq!

# cp /etc/nsswitch.nis /etc/nsswitch.conf
# domainname sun.com
# echo sun.com > /etc/defaultdomain     (or /var/yp/ypservers)

# cd /var/yp
# ypinit -c
Add the slave if available, or the master
yes
no at errors

# cd /var/yp
# /usr/lib/netsvc/yp/ypstop
# /usr/lib/netsvc/yp/ypstart

The daemon for NIS clients is ypbind.

Q) To see the NIS servers (master and slaves) from the client
A) ypcat -k ypservers

Q) How do you update the slave servers from the master?
A) # cd /var/yp
# /usr/ccs/bin/make passwd
or use yppush.
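For example, after adding a user on the NIS master, the passwd map can be rebuilt and pushed to the slaves (the user name jdoe is hypothetical):

# useradd -m -d /export/home/jdoe jdoe
# passwd jdoe
# cd /var/yp
# /usr/ccs/bin/make passwd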

Q) How to create a NIS+ server
# cp /etc/passwd /export/home/nisfiles
# cp /etc/group /export/home/nisfiles
# cp /etc/hosts /export/home/nisfiles

# vi auto_master
Remove all entries and keep only the user home directories:
host1 192.168.0.1:/export/home
host2 192.168.0.1:/export/home
:wq!

# cp /etc/nsswitch.nisplus /etc/nsswitch.conf
(the entries should read: files nisplus)
:wq!

# domainname sun.com
# echo sun.com > /etc/defaultdomain

# nisserver -r -Y
Reboot
# cd /export/home
# nispopulate -v -F

Q) How to create a NIS+ client

# cp /etc/nsswitch.nisplus /etc/nsswitch.conf
(the entries should read: files nisplus)
:wq!

# domainname sun.com
# echo sun.com > /etc/defaultdomain

# nisclient -i -h 192.168.0.1 -d sun.com

Listing table & objects in NIS+
• #nisls ;Gives the total objects in NIS+
• #nisls org_dir ;Lists the tables listed in the directory.
Listing a contents of tables
• #niscat passwd.org_dir
Listing table structure
• #niscat -o passwd.org_dir ;lists structure of password table.
Adding A user
• #nistbladm -a name=john uid=123 gid=111 home=/home/john shell=/bin/sh passwd.org_dir
Changing the user information in the passwd table (superuser only)
Fill in the corresponding values in <>
• #nistbladm -a name=<> passwd=<> uid=<> gid=<> home=<> shell=<> passwd.org_dir
example
• #nistbladm -a name=john uid=123 gid=234 home=/home/john shell=/bin/sh passwd.org_dir
to change only the shell
• #nistbladm -m shell=/usr/local/bin/bash [name=john],passwd.org_dir
Changing user passwd


As root
• # nispasswd ;the user then has to update his key with chkey -p
As user
• $ nispasswd ;update the password
• $ chkey -p ;update the encrypted key (the user's NIS+ passwd and login passwd are the same.)

Adding user credentials
• #nisaddcred -p 123 -P john local
• # nisaddcred -p unix.123@planet.com -P john.planet.com. des
123 is userid and john is the user name.
Adding / removing a user dir entry in auto_home table :
• #nistbladm -a key=john value=10.20.30.40:/home/john auto_home.org_dir
• #nistbladm -r key=john auto_home.org_dir ;If the key is not unique, then more fields need to be defined.
Removing a user
• #nistbladm -r name=john passwd.org_dir
Modifying the tables for multiple entries.
• #nisaddent -d passwd > /tmp/passwd ;Dump the table to a file
• #vi /tmp/passwd ;Edit the dumped file
• # nisaddent -r -f /tmp/passwd passwd ;Put back the edited file.
The nisaddent command is available only for some of the standard tables; for the others, either nispopulate or nistbladm has to be used.

Booting process in solaris 10

The first question in almost any Solaris interview is: what is the booting process (or what are the boot phases) in Solaris?

To understand Solaris clearly and to troubleshoot most problems, we need to know the booting process so that we can pinpoint exactly where the problem lies.

The Boot Phases

The different phases of the boot process on SPARC-based systems are described here:

(I) Boot PROM phase.

The PROM displays the system identification information and then runs power-on self test (POST), which is a diagnostics routine that scans the system to verify the installed hardware and memory. POST runs diagnostics on hardware devices and builds a device tree, which is a data structure describing the devices attached to the system. After the completion of POST, the PROM loads the primary boot program bootblk.

(II) Boot programs phase.

The bootblk program loaded by PROM finds the secondary boot program ufsboot located in the UFS file system on the default boot device and loads it into the memory.

(III) Kernel initialization phase.

The ufsboot program loads the kernel into memory. The kernel initializes itself and uses the ufsboot program to locate and load OS modules to control the system. A module is a piece of software with a specific functionality, such as interfacing with a particular hardware device. After loading enough modules to mount the root (/) file system, the kernel unmaps the ufsboot program and continues gaining control of the system. At the end of the kernel initialization phase, the kernel starts the /sbin/init process.

(IV) The init phase.

The init phase starts when, after initializing itself, the kernel starts the /sbin/init process, which in turn starts /lib/svc/bin/svc.startd to start the system services to do the following:

Check and mount file systems.

Configure network and devices.

Start various processes and perform tasks related to system maintenance.

The svc.startd process also executes run control (rc) scripts for backward compatibility.