Quantification of Cranial Asymmetry in Infants by Facial Feature Extraction

Chun-Ming Chang, Wei-Cheng Li, Chung-Lin Huang, and Pei-Yeh Chang

Abstract—In this paper, a facial feature extraction method is proposed to transform three-dimensional (3D) head images of infants with deformational plagiocephaly for the assessment of asymmetry. The features of the 3D point cloud of an infant's cranium are identified by local feature analysis and a two-phase k-means classification algorithm. The 3D images of infants with an asymmetric cranium can then be aligned to the same pose. The mirrored head model obtained from the symmetry plane is compared with the original model to measure the asymmetry. The numerical data of the cranial volume can be reviewed by a pediatrician to adjust the treatment plan, and the system can also be used to demonstrate the treatment progress.

Index Terms—Cranial asymmetry, deformational plagiocephaly, facial feature, image registration.

1. Introduction

Cranial asymmetry and deformation are commonly seen in infants. This deformity results from external forces that mold the skull of a baby before or after birth. Studies have shown an increase in the occurrence of the deformity in infants [1]. The cause can be traced to the "back to sleep" campaign, which advises supine infant positioning to reduce the risk of sudden infant death syndrome (SIDS). Such cases are usually treated non-surgically. The best time to seek treatment is within the first two years, before the cranium gradually takes its final shape. Infants with such abnormal head shapes may face social problems as they grow up. Researchers have also suggested that these infants are delayed in neurocognitive development compared with their peers [2], [3].

To understand the progress made during a treatment program, a quantitative assessment of the head shape is necessary. Several methods for quantifying cranial asymmetry have been proposed in [4] and [5]. Head shape was first assessed by observation or palpation. Later, several measurement devices [6], [7] were developed to record the patient's head circumference contour with a head ring. With the advance of technology, three-dimensional (3D) photogrammetry and photographic techniques have also been reported [8]. Their accuracy and safety make imaging systems a promising tool for evaluating the treatment of plagiocephaly.

To quantify cranial asymmetry, a simple, non-invasive method was presented by Chang et al. [7]. The authors recorded and analyzed a permanent ring of the head circumference and defined an asymmetry index for measurement. Hutchison et al. used digital photographs of a head circumference band and a flexicurve ruler to measure head shapes and adopted the oblique cranial length ratio [9]. Atmosukarto et al. defined a shape descriptor for the head and then proposed a shape severity quantification and localization method [5]. A statistical model of asymmetry using principal components analysis was developed by Lanche et al. [10].

In this paper, we analyze the head models and propose a two-phase classification method to register two models. The overall height difference criterion is used to compare the registered head models taken at distinct hospital visits. The results provide quantitative information for assessing the deformation and growth of head shapes. The flow of the proposed approach is shown in Fig. 1.

Fig. 1. System flow diagram of proposed method.

2. Locating Facial Features

In our 3D head database, the head shape and pose of a patient vary at each hospital visit. Matching two models directly with their whole point clouds will not yield satisfactory registration; registration based on partial invariant features is a better approach. Since we are only interested in the head, the data sets are preprocessed to remove disturbing portions and keep the head data only. Facial features are the landmarks on a face and provide important cues when registering two objects. To locate the facial features, the surface variation, which represents the local surface property, provides significant cues.

2.1 Covariance Analysis

The data sets obtained from the 3D imaging system are point clouds. The covariance matrix of a local neighborhood can be used to describe the local surface properties [11]. To search for the neighbors of a point, the k-nearest neighbors (k-NN) [12] method is applied in our approach. After determining the neighbors of an arbitrary point p, the covariance matrix of p can be expressed as

C = \sum_{p_j \in N_p} (p_j - \bar{p})(p_j - \bar{p})^T

where p_j ranges over the k neighbors of p, \bar{p} is the center of mass of the neighbors, and N_p is the set of neighbors of p. C is a symmetric and positive semi-definite matrix.
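As an illustration of this step, the following is a minimal Python sketch of the neighborhood covariance computation, using NumPy and SciPy's k-d tree for the k-NN search; the array layout and the value of k are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def neighborhood_covariance(points, k=20):
    """For every point p, return the covariance matrix of its k nearest neighbors.

    points : (N, 3) array holding the head point cloud.
    k      : neighborhood size for the local analysis (illustrative value).
    """
    tree = cKDTree(points)
    # indices of the k nearest neighbors of every point (the point itself is included)
    _, idx = tree.query(points, k=k)
    covs = np.empty((len(points), 3, 3))
    for i, neighbors in enumerate(points[idx]):
        centroid = neighbors.mean(axis=0)   # center of mass of the neighborhood
        d = neighbors - centroid
        covs[i] = d.T @ d                   # C = sum_j (p_j - pbar)(p_j - pbar)^T
    return covs
```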

Let the three real values \lambda_0, \lambda_1, and \lambda_2 be the eigenvalues of C in the three-dimensional case, satisfying \lambda_0 \le \lambda_1 \le \lambda_2. The surface variation of point p, denoted \sigma(p), is defined as [11]

\sigma(p) = \frac{\lambda_0}{\lambda_0 + \lambda_1 + \lambda_2}

A threshold \tau_\sigma is set to filter out the insignificant points while points with large surface variation values are kept. The remaining point set is defined as

Q_t = \{ p \mid \sigma(p) > \tau_\sigma \}
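Continuing the sketch above, the surface variation and the threshold filtering could look as follows; the threshold value tau is an illustrative assumption.

```python
import numpy as np

def surface_variation(covs):
    """sigma(p) = lambda_0 / (lambda_0 + lambda_1 + lambda_2) for a stack of covariance matrices."""
    eigvals = np.linalg.eigvalsh(covs)      # ascending order: lambda_0 <= lambda_1 <= lambda_2
    return eigvals[:, 0] / eigvals.sum(axis=1)

def filter_features(points, covs, tau=0.03):
    """Keep only the points whose surface variation exceeds the threshold tau (the set Q_t)."""
    sigma = surface_variation(covs)
    return points[sigma > tau]
```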

Fig. 2(a) and Fig. 2(b) illustrate the point clouds before and after processing, respectively. The important features, such as the eyes, nose, ears, and an irregular shape on the top of the infant's head (an artifact introduced while building the 3D model), are kept.

Fig. 2. Cranial point clouds: (a) original data set and (b) filtered point set Q_t using surface variation.

2.2 Two-Phase k-Means Classification

Although all the important features of the head model remain in the new data set Q_t, we cannot tell which organ each cluster belongs to. We therefore distinguish the features and locate the positions of the facial features with a two-phase k-means algorithm.

A. Phase One

First, the point set Q_t is divided into four clusters: the top of the head, the left ear, the right ear, and the face part. The face part contains the eyes, nose, and mouth, which are so close together that they are temporarily treated as one group. To avoid points of the same organ being classified into different clusters, we place a constraint on the distance between the cluster centers, expressed as

\| c_i - c_j \| \ge \tau_c, \quad i \ne j

where c_j is the center of a cluster and \tau_c is a specified number. Fig. 3(a) illustrates the result.

The four cluster centers (c_top, c_face, c_ear,left, and c_ear,right) form a triangular pyramid consisting of three triangles as its sides and one triangle as its bottom, as shown in Fig. 3(b). Note that c_ear,mean is the midpoint of c_ear,left and c_ear,right. The locations of the left ear, the right ear, and the top of the head can be decided from this geometric relationship.

Fig. 3. Phase one classification: (a) the result of clustering and (b) the pyramid formed by the centers of clusters.
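The following is a minimal sketch of the phase-one clustering, assuming scikit-learn's KMeans; the center-distance threshold and the strategy of simply re-running k-means with a different seed until the constraint is met are illustrative choices, not necessarily the authors' exact procedure.

```python
from itertools import combinations
import numpy as np
from sklearn.cluster import KMeans

def phase_one(Q_t, tau_c=30.0, max_tries=20):
    """Split the filtered point set Q_t into four clusters (top, face, left ear, right ear)."""
    for seed in range(max_tries):
        km = KMeans(n_clusters=4, n_init=10, random_state=seed).fit(Q_t)
        centers = km.cluster_centers_
        # accept the clustering only if every pair of centers is at least tau_c apart
        if all(np.linalg.norm(a - b) >= tau_c for a, b in combinations(centers, 2)):
            break
    return km.labels_, centers
```

Which center corresponds to the top of the head, the face, and each ear is then decided from the geometric relationship of the pyramid in Fig. 3(b).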

B. Phase Two

Next, we distinguish the features within the face cluster. The point set of the face part is divided into three clusters (left eye, right eye, and nose-mouth) by applying k-means again with different k values and thresholds. The result is illustrated in Fig. 4. The center of each cluster is marked with a dot.

The left eye c_eye,left and the right eye c_eye,right are identified by the shortest distance to the corresponding ear centers. Because its shape may change while the images are being captured, the mouth cluster is not treated as a feature.

We only need to locate the nose. Let the vector n_nose be the direction from c_ear,mean to the midpoint of the eyes (c_eye,mean). A tangent plane normal to n_nose is generated, and the nose center c_nose is specified as the intersection of this tangent plane and the nose cluster.

Fig. 4. Result of the phase-two k-means clustering: (a) the three separated clusters and (b) c_nose is defined as the intersection of the tangent plane and the nose cluster.
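A sketch of the phase-two assignment is given below, assuming the ear centers from phase one are already known; the helper name and the approximation of the tangent-plane intersection by the nose-cluster point farthest along n_nose are illustrative assumptions.

```python
import numpy as np

def identify_face_features(face_points, face_labels, c_ear_left, c_ear_right):
    """Assign the three face clusters to the left eye, the right eye, and the nose-mouth part."""
    centers = np.array([face_points[face_labels == i].mean(axis=0) for i in range(3)])

    # eyes: the cluster centers closest to the corresponding ear centers
    left_idx = int(np.argmin(np.linalg.norm(centers - c_ear_left, axis=1)))
    right_idx = int(np.argmin(np.linalg.norm(centers - c_ear_right, axis=1)))
    nose_idx = ({0, 1, 2} - {left_idx, right_idx}).pop()

    c_eye_left, c_eye_right = centers[left_idx], centers[right_idx]
    c_eye_mean = (c_eye_left + c_eye_right) / 2
    c_ear_mean = (c_ear_left + c_ear_right) / 2

    # nose direction from the ear midpoint to the eye midpoint
    n_nose = c_eye_mean - c_ear_mean
    n_nose /= np.linalg.norm(n_nose)
    nose_pts = face_points[face_labels == nose_idx]
    # the tangent plane normal to n_nose touches the cluster at its extreme point along n_nose
    c_nose = nose_pts[np.argmax(nose_pts @ n_nose)]
    return c_eye_left, c_eye_right, c_nose
```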

3. Pose Alignment

Infants will not hold their heads still for long periods of time while the cameras capture the images. In order to register head models captured for different infants and at different times, we use the locations of the facial landmarks to automatically align the different poses.

The head model is first rotated and translated to a coarse common pose as the initial position. These transformations of the head models are needed to accelerate the convergence and improve the accuracy of matching. We define a head center as the origin and the corresponding coordinate axes for the transformation, as shown in Fig. 5. Let c_el and c_er be the two vectors originating from c_eye,mean to c_ear,left and c_ear,right, respectively. The vector n_1 can be obtained by

n_1 = \frac{c_{el} \times c_{er}}{\| c_{el} \times c_{er} \|}

Let c_0 denote the foot of the perpendicular dropped from c_eye,mean onto the line through c_ear,left and c_ear,right; the vector from c_0 to c_eye,mean is therefore normal to both n_1 and the vector from c_ear,left to c_ear,right. The vector n_2 is determined by c_eye,mean and c_0 as

n_2 = \frac{c_{eye,mean} - c_0}{\| c_{eye,mean} - c_0 \|}

Fig. 5. New coordinate system for transformation of head model.

We now have a new 3D Cartesian coordinate system with c_0 as the origin and n_1 and n_2 as two of its axes; the third axis is given by the cross product of n_1 and n_2. We select the face to be in a frontal position, so the head models can then be transformed to face the front.
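A sketch of the alignment step under the reconstruction above: n_1 is taken as the normalized cross product of c_el and c_er, n_2 as the normalized vector from c_0 to c_eye,mean, and the rotation matrix is assembled from the three orthonormal axes. The ordering of the axes in the rotation matrix (which of n_1, n_2, and their cross product becomes x, y, or z) is an illustrative choice.

```python
import numpy as np

def align_head(points, c_eye_mean, c_ear_left, c_ear_right):
    """Translate the model to the head center c0 and rotate it into the frontal pose."""
    c_el = c_ear_left - c_eye_mean
    c_er = c_ear_right - c_eye_mean

    n1 = np.cross(c_el, c_er)
    n1 /= np.linalg.norm(n1)                 # normal of the plane through the eyes and ears

    # c0: foot of the perpendicular from c_eye_mean onto the ear-to-ear line
    ear_axis = c_ear_right - c_ear_left
    t = np.dot(c_eye_mean - c_ear_left, ear_axis) / np.dot(ear_axis, ear_axis)
    c0 = c_ear_left + t * ear_axis

    n2 = c_eye_mean - c0
    n2 /= np.linalg.norm(n2)                 # forward (facing) direction
    n3 = np.cross(n1, n2)                    # third, left-right axis

    R = np.vstack([n3, n1, n2])              # rows of R are the new coordinate axes
    return (points - c0) @ R.T               # coordinates of the points in the new frame
```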

4. Asymmetry Measurement

It is well known that the body plans of most animals, including humans, exhibit mirror symmetry, and a normal head shape is also approximately bilaterally symmetric. Based on this characteristic, we can measure the cranial asymmetry of an infant. We select as the symmetry plane the plane that passes through the midpoint of the eye centers and whose normal vector points from the center of the right eye to the center of the left eye, as shown in Fig. 6(a). Fig. 6(b) shows a mixed model in which the original model is superimposed on its mirrored model.

Fig. 6. Symmetric models: (a) sagittal plane for reflection and (b) a head model superimposed with its mirrored model.

For each point of the aligned model, let r denote its distance from the vertical axis through the head center, θ its angle on the X-Z plane, and y its height along the vertical axis. This parameterization is shown in Fig. 7.

Fig. 7. Illustration of r and θ.

To measure the asymmetry, we utilize the height difference (HD) [13] between the two models:

HD(\theta, y) = | r(\theta, y) - r'(\theta, y) |

where r'(\theta, y) is the distance of the corresponding point of the mirrored model at angle θ on the X-Z plane. To compute the difference within a specified region, the overall height difference (OHD) is used. It is defined as [13]

OHD = \frac{1}{N_p} \sum_{(\theta, y)} HD(\theta, y)

where N_p is the number of points in the region of interest and the sum runs over that region.
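A sketch of the HD and OHD computation is shown below, assuming the original and mirrored models have already been resampled on a common (θ, y) grid so that their radii can be compared element-wise; the resampling itself is omitted.

```python
import numpy as np

def height_difference_map(r_original, r_mirrored):
    """HD(theta, y) = |r(theta, y) - r'(theta, y)| on a common (theta, y) grid."""
    return np.abs(r_original - r_mirrored)

def overall_height_difference(hd_map, region_mask=None):
    """OHD: mean HD over the region of interest (the whole map if no mask is given)."""
    if region_mask is None:
        return hd_map.mean()
    return hd_map[region_mask].mean()
```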

5. Experimental Results

The database of 3D models used in this study is maintained by researchers at Chang Gung Memorial Hospital. The database contains 15 subjects, each with four data sets from follow-up visits at 0, 2, 6, and 10 weeks.

After the processing described in the previous sections, the head model is transformed to the frontal position, and we can measure the cranial asymmetry. The OHD values of the four head models from the distinct visits are computed, as shown in Fig. 8. The OHD values decrease monotonically, which means that the left-right symmetry of the head improves over time. The volumes of the left and right halves of the head are shown in Fig. 9. The volume of the right half gradually approaches that of the left half, while the volume increment of the left half is significantly smaller than that of its counterpart. This indicates that the treatment slows the growth of the left half while leaving space for the right half to grow.

Fig. 8. OHD values at distinct visits.

Fig. 9. Volume measures at distinct visits.

To further visualize the head differences before and after the treatment, we register the two head models constructed at 0 weeks and 10 weeks and plot the HD-map, which illustrates the HD values within a specified region wrapped around the circumference of the head; θ, y, and the intensity at (θ, y) correspond to the x-axis, the y-axis, and the HD value, respectively. The bottom row of Fig. 10 depicts the HD-map of a sample subject, and the five pictures in the top row show the corresponding locations of the head model with θ ranging from 0 to 360 degrees. The middle of the HD-map is near the back of the head, and large values near the middle of the map indicate great improvement there. This observation is consistent with the previous results.

Fig. 10. HD-map indicates great improvement in the back of the head.
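An HD-map like the bottom row of Fig. 10 can be rendered directly from such an HD grid, for example with matplotlib; the axis labels follow the description above, and the grid shape and color scale are illustrative assumptions.

```python
import matplotlib.pyplot as plt

def plot_hd_map(hd_map, theta_range=(0, 360)):
    """Display HD values with theta on the x-axis and y (height) on the y-axis."""
    plt.imshow(hd_map.T, origin="lower", aspect="auto",
               extent=[theta_range[0], theta_range[1], 0, hd_map.shape[1]])
    plt.xlabel("theta (degrees)")
    plt.ylabel("y")
    plt.colorbar(label="HD")
    plt.show()
```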

6. Conclusions

In this paper we utilized covariance analysis and k-means classification to locate facial features. An automatic pose alignment method was applied to transform the head models of infants, and the OHD criterion was then used to compare the mirrored and original head models for asymmetry measurement. Experimental results show the effectiveness of our method. This system can provide pediatricians with a useful tool for the diagnosis of cranial asymmetry.

References

[1] A. Turk, J. McCarthy, C. Thorne, and J. Wisoff, "The 'back to sleep campaign' and deformational plagiocephaly: Is there cause for concern?" Journal of Craniofacial Surgery, vol. 7, no. 1, pp. 12-18, 1996.

[2] B. Collett, D. Breiger, D. King, M. Cunningham, and M. Speltz, "Neurodevelopmental implications of deformational plagiocephaly," Journal of Developmental and Behavioral Pediatrics, vol. 26, no. 5, pp. 379-389, 2005.

[3] B. L. Hutchison, A. W. Stewart, and E. A. Mitchell, "Characteristics, head shape measurements and developmental delay in 287 consecutive infants attending a plagiocephaly clinic," Acta Pediatrica, vol. 98, no. 9, pp. 1494-1499, 2009.

[4] B. L. Hutchison, L. A. D. Hutchison, J. M. D. Thompson, and E. A. Mitchell, "Quantification of plagiocephaly and brachycephaly in infants using a digital photographic technique," Cleft Palate-Craniofacial Journal, vol. 42, no. 5, pp. 539-547, 2005.

[5] I. Atmosukarto, L. G. Shapiro, M. L. Cunningham, and M. Speltz, "Automatic 3D shape severity quantification and localization for deformational plagiocephaly," in Proc. SPIE Medical Imaging: Image Processing, 2009, DOI: 10.1117/12.810871.

[6] R. E. Behrman, V. C. Vaughan, and W. E. Nelson, Nelson Textbook of Pediatrics, 13th ed. Philadelphia: WB Saunders, 1987, pp. 1338-1339, 1354.

[7] P.-Y. Chang, Y.-W. Chien, F.-Y. Huang, N.-C. Chang, and D.-B. Perng, "Computer-aided measurement and grading of cranial asymmetry in children with and without torticollis," Clinical Orthodontics and Research, vol. 4, no. 4, pp. 200-205, Nov. 2001.

[8] T. R. Littlefield, K. M. Kelly, J. C. Cherney, S. P. Beals, and J. K. Pomatto, "Development of a new three-dimensional cranial imaging system," Journal of Craniofacial Surgery, vol. 15, no. 1, pp. 175-181, Jan. 2004.

[9] B. L. Hutchison, L. A. D. Hutchison, J. M. D. Thompson, and E. A. Mitchell, "Quantification of plagiocephaly and brachycephaly in infants using a digital photographic technique," Cleft Palate-Craniofacial Journal, vol. 42, no. 5, pp. 539-547, Sep. 2005.

[10] S. Lanche, T. A. Darvann, H. Olafsdottir, N. V. Hermann, A. E. Van Pelt, D. Govier, M. J. Tenenbaum, S. Naidoo, P. Larsen, S. Kreiborg, R. Larsen, and A. Kane, "A statistical model of head asymmetry in infants with deformational plagiocephaly," in Proc. of the 15th Scandinavian Conf. on Image Analysis, 2007, pp. 898-907.

[11] M. Pauly, M. Gross, and L. P. Kobbelt, "Efficient simplification of point-sampled surfaces," in Proc. of the Conf. Visualization, 2002, pp. 163-170.

[12] T. Rabbani, F. A. van den Heuvel, and G. Vosselman, "Segmentation of point clouds using smoothness constraints," in Proc. of ISPRS Commission V Symposium Image Engineering and Vision Metrology, 2006, pp. 248-253.

[13] Y. Liu and J. Palmer, "A quantified study of facial asymmetry in 3D faces," in Proc. of the IEEE Int. Workshop on Analysis and Modeling of Faces and Gestures, 2003, p. 222.

Chun-Ming Chang received the B.S. degree from National Cheng Kung University, Tainan in 1985 and the M.S. degree from National Tsing Hua University, Hsinchu in 1987, both in electrical engineering. He received the Ph.D. degree in electrical and computer engineering from University of Florida, Gainesville in 1997. From 1998 to 2002, Dr. Chang served as a senior technical staff member and a senior software engineer with two communication companies, respectively. He joined the faculty of Asia University in 2002. His research interests include computer vision/image processing, video compression, virtual reality, computer networks, and robotics.

Wei-Cheng Li received the B.S. degree from National Dong Hwa University, Hualien in 2010 and M.S. degree from National Tsing Hua University, Hsinchu in 2013, both in electrical engineering. He is currently a firmware engineer in Compal Electronics, Inc., Taiwan.

Chung-Lin Huang received his Ph.D. degree in electrical engineering from the University of Florida, Gainesville in 1987. He was a professor with the Electrical Engineering Department, National Tsing Hua University, Hsinchu. Since August 2011, he has been with the Department of Informatics and Multimedia, Asia University, Taichung. His research interests are in the area of image processing, computer vision, and visual communication.

Pei-Yeh Chang received the M.D. degree from the School of Medicine, National Taiwan University, Taipei in 1978. He is the Chief of the Department of Pediatric Surgery, Chang Gung Children's Hospital and associate professor, School of Medicine, Chang Gung University, Taoyuan. He was the Observer Fellow in Pediatric Urology Department, Children's Hospital, University of Pennsylvania in 1989. Prof. Chang is a member of Pacific Association of Pediatric Surgeons (PAPS), American Association of Pediatrics, Urology Section. He was the former President and Taiwan Section Governor of Asia-Pacific Association of Pediatric Urologists (APAPU). He served as the President of Taiwan Association of Pediatric Surgeons from 2008 to 2010. His clinical specialties include pediatric urology and pediatric surgery.

Manuscript received December 15, 2013; revised March 18, 2014.

C.-M. Chang is with the Department of Applied Informatics and Multimedia, Asia University, Taichung (Corresponding author e-mail: cmchang@asia.edu.tw).

W.-C. Li is with the Department of Electrical Engineering, National Tsing Hua University, Hsinchu (e-mail: alwaysxxl@gmail.com).

C.-L. Huang is with the Department of Applied Informatics and Multimedia, Asia University, Taichung (e-mail: huang.chunglin@gmail.com).

P.-Y. Chang is with the Department of Pediatric Surgery, Chang Gung Memorial Hospital, Kaohsiung (e-mail: pyjchang@cgmh.org.tw).

Digital Object Identifier: 10.3969/j.issn.1674-862X.2014.04.013
