Monday, February 14, 2011

Rolling Back a Root Pool Snapshot From a Failsafe Boot

This procedure assumes that existing root pool snapshots are available. In this example, the root pool snapshots were created and are available on the local system:

# zfs snapshot -r rpool@0730
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 6.87G 60.1G 37K /rpool
rpool@0730 18K - 37K -
rpool/ROOT 3.87G 60.1G 18K legacy
rpool/ROOT@0730 0 - 18K -
rpool/ROOT/zfs1008BE 3.87G 60.1G 3.82G /
rpool/ROOT/zfs1008BE@0730 52.3M - 3.81G -
rpool/dump 1.00G 60.1G 1.00G -
rpool/dump@0730 16K - 1.00G -
rpool/export 52K 60.1G 19K /export
rpool/export@0730 15K - 19K -
rpool/export/home 18K 60.1G 18K /export/home
rpool/export/home@0730 0 - 18K -
rpool/swap 2.00G 62.1G 16K -
rpool/swap@0730 0 - 16K -

1. Shut down the system and boot into failsafe mode.

ok boot -F failsafe
Multiple OS instances were found. To check and mount one of them
read-write under /a, select it from the following list. To not mount
any, select 'q'.

1 /dev/dsk/c0t1d0s0 Solaris 10 xx SPARC
2 rpool:5907401335443048350 ROOT/zfs1008

Please select a device to be mounted (q for none) [?,??,q]: 2
mounting rpool on /a

2. Roll back the individual root pool snapshots

# zfs rollback rpool@0730
# zfs rollback rpool/ROOT@0730
# zfs rollback rpool/ROOT/zfs1008BE@0730
# zfs rollback rpool/export@0730
(Continue rolling back the remaining snapshots in the recursive set, such as rpool/export/home@0730.)

Current ZFS rollback behavior does not roll back a recursive snapshot set with a single -r option. You must roll back each of the individual snapshots created by the recursive snapshot.

3. Reboot back to multiuser mode
# init 6

Solaris Flash archive

A Solaris Flash archive can be used to back up a complete system or to install new machines from the archive (cloning a system). With Flash installation, you create a single reference installation of the Solaris OS on one machine, which can then be replicated as a new installation on any number of systems, called clones. This document shows how to create a Flash archive image.

After you install the system with all the packages and data you need, you create the Flash archive. All the files on the system are copied to the archive along with various pieces of identifying information. The flarcreate command is used to create the archive; it requires the -n option to name the archive and a file name for the archive.

Example: the -n option specifies the archive name and the -c option enables compression:

# flarcreate -n mobile_server -c /export/home/mobilebackup.flar

Determining which filesystems will be included in the archive...
Determining the size of the archive...
The archive will be approximately 3.12GB.
Creating the archive...
8713899 blocks
Archive creation complete.
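
If you want to sanity-check the archive afterwards, the flar utility can display its identification section (the path below is simply the example archive created above):

# flar info /export/home/mobilebackup.flar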

ZFS : Basic administration guide

ZFS Pool:

ZFS organizes physical devices into logical pools called storage pools. Both individual disks and array logical unit numbers (LUNs) that are visible to the operating system can be included in a ZFS pool. These pools can be created as disks striped together with no redundancy (RAID 0), mirrored disks (RAID 1), striped mirror sets (RAID 1+0), or striped with parity (RAID-Z). Additional disks can be added to pools at any time, but they must be added with the same RAID level.

ZFS Filesystem :

ZFS offers a POSIX-compliant file system interface to the Solaris/OpenSolaris operating system. A ZFS file system is built in one and only one storage pool, but a storage pool may have more than one file system defined in it. ZFS file systems are not managed or mounted through the /etc/vfstab file; the common way to mount a ZFS file system is simply to define it against a pool. All defined ZFS file systems mount automatically at boot time unless otherwise configured.
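
For example, instead of editing /etc/vfstab you simply set the dataset's mountpoint property (the pool and dataset names below are illustrative and reappear in the examples later in this guide):

# zfs set mountpoint=/data demovol/testing    -- remounts the file system at /data
# zfs get mountpoint demovol/testing          -- verify the property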

Here are the basic commands for getting started with ZFS

Creating Storage pool using "zpool create" :

bash-3.00# zpool create demovol raidz c2t1d0 c2t2d0
bash-3.00# zpool status
pool: demovol
state: ONLINE
scrub: none requested
config:

NAME STATE READ WRITE CKSUM
demovol ONLINE 0 0 0
raidz1 ONLINE 0 0 0
c2t1d0 ONLINE 0 0 0
c2t2d0 ONLINE 0 0 0

errors: No known data errors
bash-3.00#

"zfs list" will give the details of the pool and other zfs filesytems.

bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
demovol 1.00G 900G 38.1K /demovol
bash-3.00#

Creating File Systems : "zfs create" is used to create a ZFS file system.

bash-3.00# zfs create demovol/testing
bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
demovol 1.00G 900G 38.1K /demovol
demovol/testing 32.6K 900G 32.6K /demovol/testing
bash-3.00#

bash-3.00# ls /dev/zvol/dsk/demovol -- This directory lists device files for any ZFS volumes (zvols) created in the pool.

Setting a quota for the file system : Until a quota is set, the file system shows the total available space of the containing ZFS pool.

bash-3.00# zfs set quota=10G demovol/testing
bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
demovol 1.00G 900G 39.9K /demovol
demovol/testing 32.6K 10.0G 32.6K /demovol/testing

Creating a snapshot :

bash-3.00# zfs snapshot demovol/testing@snap21
bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
demovol 1.00G 900G 39.9K /demovol
demovol/testing 32.6K 10.0G 32.6K /demovol/testing
demovol/testing@snap21 0 - 32.6K -
bash-3.00#

Get all properties of a ZFS file system :

bash-3.00# zfs get all demovol/testing
NAME PROPERTY VALUE SOURCE
demovol/testing type filesystem -
demovol/testing creation Mon Feb 9 9:05 2009 -
demovol/testing used 32.6K -
demovol/testing available 10.0G -
demovol/testing referenced 32.6K -
demovol/testing compressratio 1.00x -
demovol/testing mounted yes -
demovol/testing quota 10G local
demovol/testing reservation none default
demovol/testing recordsize 128K default
demovol/testing mountpoint /demovol/testing default
..


Cloning a ZFS filesystem from a snapshot :

bash-3.00# zfs clone demovol/testing@snap21 demovol/clone22
bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
demovol 1.00G 900G 39.9K /demovol
demovol/clone22 0 900G 32.6K /demovol/clone22
demovol/testing 32.6K 10.0G 32.6K /demovol/testing
demovol/testing@snap21 0 - 32.6K -
bash-3.00#

Monitoring ZFS storage pool I/O performance:

bash-3.00# zpool iostat 1
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
demovol 4.95M 900G 0 0 0 35
demovol 4.95M 900G 0 0 0 0
demovol 4.95M 900G 0 0 0 0
demovol 4.95M 900G 0 0 0 0

Please refer to the zfs and zpool man pages for more detailed information.

Solving Mirrored (Root) Pool Problems (zpool attach)

If you cannot attach a disk to create a mirrored root or non-root pool with the zpool attach command, and you see messages similar to the following:

# zpool attach rpool c1t1d0s0 c1t0d0s0
cannot attach c1t0d0s0 to c1t1d0s0: new device must be a single disk

You might be running into CR 6852962, which has been seen in an LDOM environment. If the problem is not CR 6852962 and the system is booted under a virtualization product, make sure the devices are accessible by ZFS outside of the virtualization product.

Creating a Pool or Attaching a Disk to a Pool (I/O error)
If you attempt to create a pool or attach a disk or a disk slice to an existing pool and you see the following error:

# zpool attach rpool c4t0d0s0 c4t1d0s0
cannot open '/dev/dsk/c4t1d0s0': I/O error

This error means that the disk slice doesn't have any disk space allocated to it, or possibly that a Solaris fdisk partition or the slice doesn't exist on an x86 system. Use the format utility to allocate disk space to a slice. If the x86 system doesn't have a Solaris fdisk partition, use the fdisk utility to create one.
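
As a rough sketch (device names are examples), the fix on an x86 system might look like this:

# fdisk -B /dev/rdsk/c4t1d0p0    -- create a default Solaris fdisk partition covering the whole disk
# format                         -- then use the partition menu to allocate space to slice 0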


Panic/Reboot/Pool Import Problems
During the boot process, each pool must be opened, which means that pool failures might cause a system to enter into a panic-reboot loop. In order to recover from this situation, ZFS must be informed not to look for any pools on startup.

Boot From Milestone=None Recovery Method
Boot to the none milestone by using the -m milestone=none boot option

ok boot -m milestone=none

Remount your root file system as writable.
Rename or move the /etc/zfs/zpool.cache file to another location.
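
For example (a minimal sketch; the exact remount syntax can vary with the root file system type):

# mount -o remount,rw /                                -- make the root file system writable
# mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad     -- move the cache so no pools are imported at boot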

These actions cause ZFS to forget that any pools exist on the system, preventing it from trying to access the bad pool causing the problem. If you have multiple pools on the system, do these additional steps:

* Determine which pool might have issues by using the fmdump -eV command to display the pools with reported fatal errors.
* Import the pools one-by-one, skipping the pools that are having issues, as described in the fmdump output.

Once the system is back up, issue the svcadm milestone all command.

ZFS Mirrored Root Pool Disk Replacement

If a disk in a mirrored root pool fails, you can either replace the disk or attach a replacement disk and then detach the failed disk. The basic steps are like this:

1. Identify the disk to be replaced by using the zpool status command.

2. You can do a live disk replacement if the system supports hot-plugging. On some systems, you might need to offline and unconfigure the failed disk first. For example:

# zpool offline rpool c1t0d0s0
# cfgadm -c unconfigure c1::dsk/c1t0d0

3. Physically replace the disk.

4. Reconfigure the disk. This step might not be necessary on some systems.

# cfgadm -c configure c1::dsk/c1t0d0

5. Confirm that the replacement disk has an SMI label and a slice 0 to match the existing root pool configuration.

6. Let ZFS know that the disk is replaced.

# zpool replace rpool c1t0d0s0

7. Bring the disk online.

# zpool online rpool c1t0d0s0

8. Install the bootblocks after the disk is resilvered.

9. Confirm that the replacement disk is bootable by booting the system from the replacement disk.
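
The boot block installation in step 8 typically looks like one of the following (disk names are examples; installboot is used on SPARC, installgrub on x86):

# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0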

Saturday, February 12, 2011

questions with answers

Solaris Questions:
How to view the kernel (shmmax) parameter value in Solaris 10? (sysdef -i)
What are the main difference between solaris 10 & 9?
The main difference between Solaris 9 and Solaris 10 is SMF (Service Management Facility). In Solaris 9, if any service goes down, you have to restart all the services, which is the disadvantage. In Solaris 10, if any service goes down, you can select and enable just that particular service instead of restarting all the services.
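
For example (a minimal illustration; the service name used here is just an example):

# svcs -xv                                 -- list services that are failed or in maintenance
# svcadm clear svc:/network/ssh:default    -- clear the maintenance state of just that service
# svcadm restart network/ssh               -- or simply restart the individual service
# svcs network/ssh                         -- verify it is online again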
============================================================
The automount facility contains three components:
The AutoFS file system
The automountd daemon
The automount command

The AutoFS map types:

Master Map - The auto_master map associates a directory, also called a mount point, with a map.
Direct Map - Lists the mount points as absolute path names. This map explicitly indicates the mount point on the client.
Indirect Map - Lists the mount points as relative path names. This map uses a relative path to establish the mount point on the client.
Special - Provides access to NFS servers by using their host names.
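
A minimal sketch of how these maps fit together (server names and paths are examples):

# /etc/auto_master entries: mount point -> map
/home    auto_home
/-       auto_direct

# /etc/auto_home entry (indirect map, key is relative to /home)
john     server1:/export/home/john

# /etc/auto_direct entry (direct map, key is an absolute path)
/usr/share/docs    server2:/export/docs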
===============================================================================

What is the command to do an interactive boot from the ok prompt?
boot -a performs an interactive boot from the ok prompt (Stop+A drops a running system to the ok prompt).

Which NFS daemons are found on the NFS server?
On the NFS server side there are five daemons:
nfsd
mountd
lockd
statd
nfslogd

These five daemons run on the NFS server; statd and lockd run on the NFS client as well.

What file controls system wide password aging?
/etc/default/passwd controls the system-wide password aging defaults (per-user aging values are stored in /etc/shadow).

What command will display the VTOC for disk c0t0d0s0?
prtvtoc /dev/rdsk/c0t0d0s0

Where are the templates stored that are copied into the user's home directories for their personal customizations?
/etc/skel

How do we know how many LAN cards we have in a server?

Give the command that will display your default boot device?
eeprom boot-device
prtconf -vp |grep -i boot


What command can you use to display all of your groups?
groups - To display full list

What command can reconfigure devices without a reboot?
devfsadm

What are the different phases in the boot process?
The boot phases of the Solaris operating environment are:
1. Boot PROM
2. Boot programs such as bootblk and ufsboot
3. Kernel initialization, such as loading modules
4. init phase

How many CPUs can we connect to a SPARC machine?
A Sun Fire 15K can have up to a maximum of 106 processors.

List the hidden files in the current directory?
ls -a | grep "^\."

How to find 32 or 64 bit system instances of OS?
isainfo -b

How to configure IP Multipathing?

How would you find out what version of Solaris is currently running?
uname -r shows the release of the OS and uname -s shows the type of OS.

How do you find out drive statistics? - iostat -E
Display the Ethernet address arp table? - arp -a
Display the inter-process communication facility status? - ipcs
Alternative for the top command? - prstat -a
Display the topmost process utilizing the most CPU? - top

Given an ISO image how you mount in solaris ?
Create a loopback device file with lofiadm:

acadie# /usr/sbin/lofiadm -a /path/to/image.iso
This will create, for example, /dev/lofi/1. It can be mounted as follows:
acadie# mount -F hsfs -o ro /dev/lofi/1 /mnt/dir
------
lofiadm -a /export/temp/software.iso /dev/lofi/1
The lofi device creates a block device version of a file. This block device can be mounted to /mnt with the following command.
mount -F hsfs -o ro /dev/lofi/1 /mnt
=========================================================================================================

ZFS
Which command(s) would you use to setup the ZFS file system?
zfs create (the storage pool itself is created first with zpool create)
FEATURES OF ZFS
How to create ZFS File System
How to create ZFS snapshot
Restore from a ZFS snapshot
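
These commands are covered earlier in this post; as a quick recap (the pool and dataset names reuse the earlier examples):

# zpool create demovol raidz c2t1d0 c2t2d0    -- create the storage pool
# zfs create demovol/testing                  -- create a ZFS file system
# zfs snapshot demovol/testing@snap21         -- take a snapshot
# zfs rollback demovol/testing@snap21         -- restore (roll back) from the snapshot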
===================================================================================

ZONES :
Explain Zone Features & types of Zones
Features of the Global Zone:
1. Solaris always boots (cold/warm) to the global zone.
2. Knows about all hardware devices attached to the system.
3. Knows about all non-global zones.

Features of Non-Global Zones:
1. Installed at a location on the file system of the global zone, the 'zone root path', e.g. /export/home/zones/zones1 {Zone2, Zone3, ...}; this acts as the root directory for the zone.
2. Share packages with the global zone.
3. Manage distinct hostname and table files.
4. Cannot communicate with other non-global zones by default; a NIC must be used, which means using the standard network API (TCP).
5. The global zone administrator can delegate non-global zone administration.

Zone Types:

1. Sparse Root Zones - share key files with the global zone.
2. Whole Root Zones - require more storage.


Zone daemons and their functions
Where is the zone configuration located?
In zones, what are the Whole Root model and the Sparse Root model?
Creating a zone (sparse root)
Cloning a Zone
=======================================================================
Zone cloning works only from the Solaris 10 11/06 release onwards.
Assume that a zone called apple is created and we are cloning it to a new zone called orange. To clone the apple zone, the apple zone must be in the installed state.
# zoneadm -z apple halt
# zonecfg -z apple export -f /etc/zones/apple.txt    <- copies apple's configuration to a text file
# vi /etc/zones/apple.txt    <- edit the zonepath to give a new path for the new zone named orange, assign a new IP address, and save the file
# zonecfg -z orange -f /etc/zones/apple.txt
# zoneadm -z orange clone apple
# zoneadm -z orange boot
# zlogin -C orange
The same data and users that we created in the apple zone are found in the orange zone.
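
To verify the clone, something like the following can be used (zone names are from the example above):

# zoneadm list -cv    -- shows apple and orange along with their state and zone path
# zlogin orange       -- log in and confirm the same data and users exist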

===============================================================================================


NFS server configuration on solaris 10

The network file system (NFS)

NFS is a system that can be used to access file systems over the network. NFS version 4 is the default NFS in Solaris 10. The NFS service is managed by the Service Management Facility (SMF): NFS can be enabled, disabled, or restarted with the svcadm command, and the status of the NFS service can be obtained with the svcs command. The benefit is sharing files over the network among computers that may be running different operating systems.

The NFS Service

The NFS service is a network service that enables computers of different architectures running different operating systems to share file systems across the network. A wide spectrum of operating systems ranging from Windows to Linux/UNIX support NFS. It has become possible to implement the NFS environment on a variety of operating systems because it is defined as an abstract model of a file system, rather than an architectural specification. Each operating system applies the NFS model to its specific file system semantics. This means that file system operations such as reading and writing work for the users as if they were accessing a file on the local system.

The benefits of the NFS service are described here:

1) It enables users on the network to share data, because all computers on the network can access the same set of files.

2) It reduces storage costs by letting computers share applications and common files instead of needing local disk space on each computer for each common file and user application.

3) It provides data consistency and reliability, because all users can read the same set of files, and whenever changes are made, they are made only at one place.

4) It makes the mounting of file systems accessing the remote files transparent to users.

5) It supports heterogeneous environments and reduces system administration overhead.

NFS is a network service offered in the client/server environment

NFS Servers and Clients

NFS is a client/server system; the terms client and server refer to the roles that computers assume in sharing resources (file systems in this case) on the network. In NFS, computers that make their file systems available over the network, and thereby offer the NFS service to serve the requested files, act as NFS servers, and the computers that access those file systems act as NFS clients. In the NFS framework, a computer on a network can assume the role of a client, a server, or both.

Here is how NFS works:

A server makes a file system on its disk available for sharing, and the file system can then be accessed by an NFS client on the network.

A client accesses files on the server's shared file system by mounting the file system.

The client does not make a copy of the file system on the server; instead, the mounting process uses a series of remote procedure calls that enable the client to access the file system transparently on the server's disk. To the user, the mounting works just like a mount on the local machine.

Once the remote file system (on the server) is mounted on the client machine, the user types commands as though the file systems were local.

You can mount an NFS file system automatically with AutoFS.
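
As an illustration (server and path names are examples), a client can mount a shared file system manually or have it mounted at boot via /etc/vfstab:

# mount -F nfs server1:/export/docs /mnt/docs

# /etc/vfstab entry (device to mount, device to fsck, mount point, FS type, fsck pass, mount at boot, options):
server1:/export/docs  -  /mnt/docs  nfs  -  yes  ro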

The NFS File Systems

In most UNIX system environments, a file hierarchy that can be shared by using the NFS service corresponds to a file system or a portion of a file system. However, a file system resides on a single operating system, and NFS support works across operating systems. Moreover, the concept of a file system might be meaningless in some non-UNIX environments. Therefore, the term file system in NFS refers to a file or a file hierarchy that can be shared and mounted in the NFS environment.

An NFS server can make a single file or a directory subtree (file hierarchy) available to the NFS service for sharing. A server cannot share a file hierarchy that overlaps with a file hierarchy that is already being shared. Note that peripheral devices such as modems and printers cannot be shared under NFS.

Managing NFS

Since the release of Solaris 9, the NFS server starts automatically when you boot the system. Nevertheless, you do need to manage NFS, which includes administering the NFS service, working with NFS daemons, and making file systems available for sharing.

Administering the NFS Service

When the system is booted, the NFS server is automatically started by executing the nfs.server scripts. However, when the system is up, you may need to stop the service or start it again for whatever reason without rebooting the system. For that, you need to know that the NFS service is managed by the Service Management Facility (SMF) under the identifier network/nfs/server. By means of this identifier, you can find the status of the service by using the svcs command, and you can start (enable) or stop (disable) the service by using the svcadm command.

You can determine whether the NFS service is running on your machine by issuing the command shown here:

# svcs network/nfs/server

This command displays whether the NFS service is online or disabled. If you want to stop (disable) the service, issue the following command:

# svcadm disable network/nfs/server

You can start the service by issuing the following command:

# svcadm enable network/nfs/server

When the system is up, some daemons are running to support the NFS service.

Working with NFS Daemons

Since the release of Solaris 9, NFS service starts automatically when the system is booted. When the system goes into level 3 (or multiuser mode), several NFS daemons are started to support the service.

The following daemons are automatically started in NFS version 4 when the system boots:

automountd - Handles mount and unmount requests from the autofs service.

nfsd - Handles file system requests from clients.

nfs4cbd - Manages the communication endpoints for the NFS version 4 callback program.

nfsmapid - Provides integer-to-string and string-to-integer conversions for the user ID (UID) and the group ID (GID).

The nfsd daemon handles the file system requests from the client and is automatically started with option -a. You can change the parameters of the command by editing the /etc/default/nfs file. The syntax for the nfsd command is as follows:

nfsd [-a] [-c {#_conn}] [-l {listenBacklog}] [-p {protocol}] [-t {device}]
[{nservers}]

The options and parameters are described here:

-a. Start the daemon over all available connectionless and connection-oriented transport protocols, such as TCP and UDP. This is equivalent to setting the NFSD_PROTOCOL parameter in the nfs file to ALL.

-c (#_conn). Set the maximum number of connections allowed to the NFS server over connection-oriented transport protocols such as TCP. By default, the number is unlimited. The equivalent parameter in the nfs file is NFSD_MAX_CONNECTIONS.

-l (listenBacklog). Set the connection queue length, in number of entries, for NFS over TCP. The default value is 32. This number can also be set with the NFSD_LISTEN_BACKLOG parameter in the nfs file.

-p (protocol). Start the daemon over the protocol specified by (protocol). The default in NFS version 4 is TCP. The equivalent parameter in the nfs file is NFSD_PROTOCOL.

-t (device). Start an NFS daemon for the transport specified by (device). The equivalent parameter in the nfs file is NFSD_DEVICES.

(nservers). Set the maximum number of concurrent requests from clients that the NFS server can handle. The equivalent parameter in the nfs file is NFSD_SERVERS.
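
For example, a couple of these parameters might be set in /etc/default/nfs like this (the values shown are only illustrative):

NFSD_SERVERS=64          -- maximum number of concurrent client requests
NFSD_LISTEN_BACKLOG=32   -- TCP connection queue length
NFSD_PROTOCOL=ALL        -- start nfsd over all available transports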


The default NFS version in Solaris 10 is version 4. Unlike previous versions of NFS, NFS version 4 does not use these daemons: lockd, mountd, nfslogd, and statd.

Sharing File Systems

On the server machine, you make a file system available for sharing by using the share command. You can use this command manually for testing purposes, or to make a file system available only until the system is rebooted. If you want the sharing of a file system to be permanent and automatic, you should enter the share command into the /etc/dfs/dfstab file. Each entry in this file is a share command, and the file is automatically executed at boot time when the system enters run level 3. The syntax for the share command is shown here:

share [-F (FSType)] [-o (specificOptions)] [-d (description)] [(pathname)]

The options are described here:

-F (FSType). Specifies the file system type, such as nfs.

-o (specificOptions). The (specificOptions) specifies the options for controlling access to the shared file system. The possible values for (specificOptions) are as follows:

rw. Read/write permissions for all clients. This is the default behavior.

rw=(client1):(client2):... Read/write permission for the listed clients; no access for any other client.

ro. Read-only permission for all clients.

ro=(client1):(client2):... Read-only permission for the listed clients; no access for any other client.

-d (description). The (description) specifies the description for the shared resource.

If you want to know the resources being shared on the local server, issue the dfshares command without any arguments or options.
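
For example (the remote hostname is illustrative):

# dfshares              -- list the resources shared by the local server
# dfshares server1      -- list the resources shared by a remote NFS server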


Files related to the NFS service

/etc/default/autofs - Configuration information for autofs.

/etc/default/fs - Lists the default file system type for local file systems.

/etc/default/nfs - Configuration information for the nfsd daemon.

/etc/dfs/dfstab - Contains a list of local resources to be shared; the share commands.

/etc/mnttab - Lists file systems that are currently mounted.

/etc/dfs/sharetab - Lists the local and remote resources that are shared.

/etc/vfstab - Defines file systems to be mounted locally.

Some Examples how to share files in NFS

# vi /etc/dfs/dfstab

share -F nfs -o ro,anon=0 /cdrom/sol_10_305_sparc/s0/Solaris_10/Tools - shares the CD-ROM OS software with read-only permission.

share -F nfs -o rw,anon=0 /cdrom - shares files with read and write permission; anon=0 maps requests from unknown (anonymous) users to UID 0.

share -F nfs -o rw=hostname1 /cdrom - gives access to only one host.

share -F nfs -o rw=-hostname1 /cdrom - denies hostname1 and allows access to all others.

share -F nfs -o rw=hostname1:hostname2 /cdrom - gives access to hostname1 and hostname2.

share -F nfs -o rw=-hostname1:-hostname2 /cdrom - denies hostname1 and hostname2 and allows access to all other computers in the network.

:wq!

# shareall (or)
# exportfs -va - to export the file systems

# share - to see the file systems shared in NFS and which are exported currently

Client side: mount the shared file system

# mount -F nfs hostname1:/cdrom /cdrom - mounts the shared directory onto a local directory.

# cd /cdrom
# ls