Module 5: Unsupervised and Supervised Image Classification

 

    This map shows current land use in Germantown, Maryland. It was created using supervised classification in ERDAS. The distance image demonstrates which pixels may have the wrong classification. (Brighter pixels indicate greater distance from any of the signature classes.)

Exercise 1:


            This exercise covered unsupervised classification in ERDAS, which assigned unique values to the raster pixels representing features in the image. I began by loading the image of the UWF campus in ERDAS and changing the color scheme options to “Red: 3, Green: 2, Blue: 1.” I thought it was interesting that you can change the settings to ensure the output is accurate. I set the convergence threshold to 0.95 so that ERDAS would continue the analysis until 95% of the pixels remained in the same class between iterations. The explanations of those settings and others in the lab were very helpful; otherwise, I would not have understood what changes I was making. The unsupervised classification created a thematic raster (with an attribute table) that has a lower spatial resolution. To add the image to the map, I had to right-click on the viewer, click “Open Raster Layer,” and navigate to where I told ERDAS to save the image. The output image looks similar to the original but with more washed-out colors.
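The clustering idea behind this step can be sketched in a few lines. This is only a minimal stand-in: ERDAS's unsupervised classification uses the ISODATA algorithm, while the sketch below uses plain k-means with a deterministic initialization, and the tiny image and class count are made-up assumptions. It does show how a convergence threshold like 0.95 works: iteration stops once that fraction of pixels keeps the same class from one pass to the next.

```python
import numpy as np

def unsupervised_classify(image, n_classes=50, convergence=0.95, max_iter=24):
    """Cluster pixels by their band values; image is (rows, cols, bands)."""
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    # Spread the initial class means across the spectral range of the image.
    centers = np.linspace(pixels.min(axis=0), pixels.max(axis=0), n_classes)
    labels = np.full(len(pixels), -1)
    for _ in range(max_iter):
        # Assign each pixel to the nearest class mean (spectral distance).
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        # Convergence threshold: stop once (e.g.) 95% of pixels keep
        # the same class between iterations.
        if (new_labels == labels).mean() >= convergence:
            labels = new_labels
            break
        labels = new_labels
        for k in range(n_classes):
            if (labels == k).any():
                centers[k] = pixels[labels == k].mean(axis=0)
    return labels.reshape(image.shape[:2])

# Toy 3-band image: the right half is spectrally distinct from the left.
img = np.zeros((4, 4, 3))
img[:, 2:] = 200
classes = unsupervised_classify(img, n_classes=2)
```

The output is a single-band thematic raster of class numbers, which is why the classified image needs an attribute table to carry class names and colors.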

            The second task was to reclassify the 50 categories into 5 categories based on feature type. For now, I left the original image visible in case I needed it. I began my reclassification with buildings, as Hopko suggested, and continued to the other classes. I renamed all the classes in this step with the full class name; I didn’t want to get confused by abbreviations. The trick of changing a pixel color to a bright color was helpful because it let me see where the rest of the class was in the image. I didn’t find any of the tools for viewing both images listed in the instructions useful; I just checked/unchecked the classified image to see the one underneath.

Next, I used the “Recode” feature on the Raster tab to recode the classes into 5 new classes. My attribute table already had a “Class Names” column, so I just recolored/renamed the classes and added an area column. Finally, I calculated the area of permeable and impermeable surfaces in the image. I estimated the Mixed classification to be 60% impermeable and 40% permeable, and the Shadows class to be 10% impermeable and 90% permeable.
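The permeable/impermeable tally amounts to a weighted sum over the class areas. The sketch below assumes hypothetical class names and areas (only the Mixed 60/40 and Shadows 10/90 splits come from the write-up; the lab's actual areas are not reproduced here):

```python
# Hypothetical per-class areas in square meters (placeholders, not lab values).
class_areas = {
    "Buildings": 120_000,
    "Roads": 80_000,
    "Vegetation": 300_000,
    "Mixed": 50_000,
    "Shadows": 20_000,
}

# Fraction of each class treated as impermeable surface.
impermeable_fraction = {
    "Buildings": 1.0,
    "Roads": 1.0,
    "Vegetation": 0.0,
    "Mixed": 0.6,     # estimated 60% impermeable / 40% permeable
    "Shadows": 0.1,   # estimated 10% impermeable / 90% permeable
}

impermeable = sum(area * impermeable_fraction[name]
                  for name, area in class_areas.items())
permeable = sum(class_areas.values()) - impermeable
```

With these placeholder numbers the split works out to 232,000 m² impermeable and 338,000 m² permeable, which is the same arithmetic the area column supports in ERDAS.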

Exercise 2:

            The second exercise in Module 5 covered collecting spectral signatures for supervised classification. The other steps of supervised classification (evaluating signatures and classifying images) were taught in different exercises. In supervised classification, I created the class types for the computer to recognize features’ spectral characteristics. What keeps striking me about ERDAS is how many tasks it can perform while still not being user friendly. The lab instructions are vital because the steps to perform some tasks (even basic ones) are not intuitive in ERDAS.

            First, I added the Gray’s Harbor image and the previously captured signatures file. In this exercise, I added more signatures to this file so the image could be appropriately classified. I did get multiple errors when adding the signatures file, as the instructions predicted. After examining the signature file, I removed the layer. The polygons in the signature file had a white dashed-line border, and areas excluded from a polygon were marked with a white/black dashed line. I started adding a signature with a polygon around part of a mountain lake: I used the inquire cursor tool to locate the lake’s coordinates, then drew a polygon around a portion of the lake pixels. Next, I added the polygon signature to the Signature Editor and renamed it. I repeated this process for wetlands, coniferous forest, and mixed forest. This part of the lab was not difficult, as it is similar to digitizing polygons in ArcGIS Pro.

            The next task was to create signatures from a “seed.” In the Region Growing Properties window, the instructions covered the important topics of spectral Euclidean distance and neighborhood. My understanding is that the spectral Euclidean distance is the variation in DN (digital number) values that the computer will accept as part of the region, and the neighborhood determines which neighboring pixels the computer examines when growing the region. These two settings need to be adjusted for each area of interest. Using the Region Growing Properties window, I created signatures for deciduous forest, agriculture, grass, emergent growth, urban, and clear-cut/bare soil.
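The two settings described above can be sketched as a simple flood fill: starting from a seed pixel, accept each neighbor whose spectral Euclidean distance stays under a threshold. This is only an illustration of the concept, assuming a toy image, seed, and threshold; ERDAS's actual seed tool may grow against a running region mean rather than the fixed seed value used here.

```python
from collections import deque
import numpy as np

def grow_region(image, seed, max_distance, neighborhood=4):
    """Return a boolean mask of the region grown from `seed` (row, col)."""
    rows, cols, _ = image.shape
    seed_dn = image[seed].astype(float)
    mask = np.zeros((rows, cols), dtype=bool)
    mask[seed] = True
    # A 4-neighborhood looks at the orthogonal pixels; 8 adds the diagonals.
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if neighborhood == 8:
        offsets += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in offsets:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not mask[nr, nc]:
                # Spectral Euclidean distance across all bands.
                if np.linalg.norm(image[nr, nc] - seed_dn) <= max_distance:
                    mask[nr, nc] = True
                    queue.append((nr, nc))
    return mask

# Toy 5x5 single-band image with a distinct "lake" in the top-left corner.
img = np.zeros((5, 5, 1))
img[:2, :2] = 100
lake = grow_region(img, (0, 0), max_distance=10)
```

Raising `max_distance` lets the region absorb more spectral variation, which is why the settings have to be tuned per feature: a calm lake tolerates a small distance, while a mixed forest needs a larger one.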
