Results Driven Accountability Effort—Question Four

OSERS’ Office of Special Education Programs (OSEP) appreciates the comments and suggestions posted in response to RDA questions one, two, and three. OSEP will accept comments on question 4 until October 19, 2012.

RDA Question #4:

OSEP is committed to developing a results-driven accountability (RDA) system that leads to increased state and local capacity to improve academic results and functional outcomes for children with disabilities. As part of this effort, OSEP asked the National Center on Educational Outcomes (NCEO) to work with a small group of stakeholders and assessment experts to provide input on measures that could be used to review states’ performance results of their students with disabilities who receive special education services. The group’s recommendations are contained in a report, Using Assessment Data as Part of a Results-Driven Accountability System: Input from the NCEO Core Team (Word | PDF). In addition, OSEP asked NCEO to develop sample approaches for how measures included in the report could be used by OSEP, which are included in the companion report: Sample Approaches for Using Assessment Data as Part of a Results-Driven Accountability System (Word | PDF). What is your feedback on these reports? What other data sources may be useful as we move forward in the development of a RDA system?


  1. I am writing on behalf of the Statewide Parent Advocacy Network (SPAN), NJ’s “one-stop” for families of children birth to 26. We have long supported enhancing the accountability of schools, districts, and states for the performance of all students, including those students most at risk such as low-income students, students of color, immigrant and LEP students, and students with disabilities. We find it ironic that the US Department of Education Office of Special Education Programs would consider using performance of students with disabilities on state tests as a key results-driven accountability factor at the same time that the Secretary of the US Department of Education has authorized “waivers” of the accountability provisions of the Elementary and Secondary Education Act that allow states to set lower performance standards for students with disabilities, low-income students, etc. We see these waivers as letting states, districts and schools “off the hook” and contradictory to a results-driven accountability process.

    We have reviewed the proposals and discussed them with other parent centers across the country, disability advocates, and parents. We support the concept of using the performance of students with disabilities on state assessments as one qualitative result, or outcome, that should be reviewed in determining the need for further US Department of Education monitoring and technical assistance (TA) aimed at improving performance. However, we have several concerns and recommendations.

    First, the performance of students with disabilities of color and LEP students with disabilities should be reviewed in addition to the performance of students with disabilities overall in identifying states that may need further monitoring, TA, and corrective action. Since students of color and LEP students overall underperform on state tests compared to white students, states with higher numbers of such students could be unfairly compared to states with primarily white student populations. Further, a state should not be able to escape closer scrutiny simply because its majority white students with disabilities are doing well, if students of color and LEP students with disabilities are not.

    Second, the performance of students with disabilities on state assessments should not be the only result or outcome that is reviewed by the US Department of Education in identifying states for further monitoring, TA, and corrective action. While performance on state assessments is important for students with disabilities, other criteria are also important for their future – the extent to which they have been included with students without disabilities, for example, is critical to ensuring that they have developed the friendships and social skills needed to navigate a non-segregated world. The extent to which they are provided with effective and meaningful transition services, and information about what students with disabilities are doing after graduation (e.g., working in competitive employment, going on to post-secondary education, sitting at home), is also critical. And the effective engagement of families in the special education process makes a great deal of difference in a student’s education as well as their post-education life. We would not support the use of a single indicator to trigger additional monitoring, TA, and corrective action. Further, any triggers must include a review of disparate outcomes by race, ethnicity, and language status.

    Third, the US Department of Education must continue to monitor states for compliance issues, not just for results issues. Parent engagement, inclusion, evidence-based services, effective transition services, timely services, etc., are all important components of a system that leads to effective outcomes. Outcomes do not happen just because we set high standards or report on the extent to which those standards are met, although these are both important. To reach positive outcomes, there must be opportunity to learn, and the components identified above are vital to ensuring that students with disabilities have the opportunity to learn.

    Finally, SPAN agrees with most of the framing considerations:
    • Public transparency and understandability are critical features of a results-driven accountability system and must be reflected in measures used to review states on student performance.
    • Multiple measures must be included. No single measure should be used in making decisions about student performance results.
    • The use of measures of student performance should provide appropriate incentives to states, particularly in relation to identified values (e.g., inclusion in the general assessment).
    • The measures should provide a flag to look deeper into areas that need improvement.
    • A plan should be developed and steps taken to monitor, validate, and improve the use of measures by OSEP and others; additional variables may be appropriate to enhance the measures in the future.
    • Variables that may be related to student performance but that have inconsistent interpretations and reliability should not be included in measures that are used for reviewing states on the performance of their students with disabilities.

    However, we do not agree with the final framing consideration, that states should not have to collect any additional data in this shift to a results-driven accountability system. We also support the Core Team recommendations. In terms of specific approach, we do not support any comparison of actual performance with the state proficiency target, particularly given the new waivers that allow lower proficiency targets to be set for students with disabilities than for other students, and given that states have set a range of proficiency targets for their students; states that have set a higher bar should not be penalized because they do not reach that higher bar, especially since they may actually be outperforming states that set lower bars. Race, ethnicity, and LEP disaggregated data should be a key consideration if we are ever to close the achievement gap.

    In closing, the Statewide Parent Advocacy Network of NJ appreciates the opportunity to submit these comments on the proposed use of the performance of students with disabilities on state tests as part of a results-driven accountability process. However, we believe that a major shift such as this one should not occur without a much larger public conversation where diverse families, self-advocates, disability advocates, and educators and administrators have the opportunity to share their experiences and ideas for consideration by the US Department of Education.

  2. The Collaboration to Promote Self Determination (CPSD) is a coalition of national organizations that advocates for innovative public policy reform focused on promoting the effective transition of students with intellectual and developmental disabilities into adulthood by preparing them to pursue and obtain optimal outcomes in the areas of employment, economic advancement, and independent living.

    The undersigned members of CPSD submit the following comments in response to OSEP RDA question #4, which focuses on the recommendations that were made by the NCEO Core Team.

    Autistic Self Advocacy Network
    Council of Parent Attorneys and Advocates
    National Down Syndrome Congress
    National Down Syndrome Society
    Physician-Parent Caregivers, Inc.
    United Cerebral Palsy

    The following comments are on the Results Driven Accountability framing considerations and other data sources that should be used for RDA. Given the detailed technical nature of this RDA question and the allowable response word limit, separate comments are posted on the tables.

    We appreciate the work of the National Center on Educational Outcomes (NCEO) Core Team, which provided input to OSEP on measures that could be used to review states’ performance results for their students with disabilities who receive special education services. It is our understanding that the work of the NCEO Core Team was limited to current State Performance Plan indicator 3. The report states under framing consideration #6 that LRE was not included as a variable in the assessment tables at this time. We want to be clear in stating our position that LRE must continue to be a key State Performance Plan indicator as OSEP moves to Results Driven Accountability (RDA), even if it is not a variable in the assessment performance tables. Also, as states drill down into the data from the tables, they should receive technical assistance from OSEP if LRE data, disaggregated by disability category, do not meet accepted targets or when there are large state-to-state discrepancies.

    CPSD also has a comment on framing consideration #7. We agree that it is important to ensure that RDA is streamlined, relevant, and meaningful and that the data collection burden on states should be reduced, if possible. However, the primary consideration needs to be on capturing measures that address the fundamental purpose of monitoring, which is to determine whether students are actually benefiting from the educational services they are receiving. This may involve reducing burden in some areas and increasing it in others to obtain meaningful data. We know that improvement occurred on the SPP compliance indicators, which were the focus of state determinations, and we do not want to lose ground on those indicators because too much attention shifted away from them.

    Comments Regarding Other Data Sources for RDA
    Sources for the following data should be used in addition to assessment data. The data should be disaggregated for each state by disability category and by type of assessment, if possible.
    • Educational environment
    • Exit data
    • Post-school college and career outcomes
    • Suspension and expulsion
    • Parent involvement
    • Disproportionality
    • Compliance with timelines related to Part C to Part B transition; due process, mediation, and complaint resolution
    • Accuracy and timeliness of state-reported data
    • Use of restraint and seclusion

  3. The undersigned Collaboration to Promote Self-Determination members submit the following comments

    Autistic Self Advocacy Network
    Council of Parent Attorneys and Advocates
    National Down Syndrome Congress
    National Down Syndrome Society
    Physician-Parent Caregivers, Inc.
    United Cerebral Palsy

    Tables 4 and 5
    1. These two tables for alternate assessments should break out data for AA-AAS and AA-MAS (if the state has one) instead of combining the data. These are very different assessments. It is important to have clear data for each one in order to appropriately guide technical assistance.

    2. The data regarding proficient performance on the AA-AAS should be broken out to show the percent of those students who received advanced scores in order to ensure that states are not misidentifying students for the AA-AAS. A high number of “advanced” scores may suggest inappropriate participation decisions. Depending on the design of the AA-AAS, it may suggest that some “advanced” students would more appropriately be moved to the AA-MAS or the general assessment. Alternatively, it may suggest that the AA-AAS is of very low rigor, based on low expectations for students with the most significant cognitive disabilities. States should monitor to understand why high rates of advanced scores are occurring and to avoid negative consequences. We support AA-AAS designs that do not have ceiling effects, so that students who are well taught can show the range of what they have learned. With an appropriately rigorous assessment and clear implementation and monitoring of participation policies, we would not expect to see high percentages of students achieving advanced scores.

    3. There should be a measure for determining relative difficulty of the AA-AAS for each state. The report recommends the comparison of the general assessment to the NAEP as a possible method for determining the difficulty level of the general assessment. We understand that there is no alternate assessment for NAEP to use in a comparison to the state’s AA-AAS. We have long advocated that one be developed. However, a method must be developed for determining the relative difficulty of the state’s AA-AAS, for all the same reasons the Core team recommended this element for the review of general assessment data. One method to be considered is the comparison of the performance of students with disabilities on the general assessment to the performance of the students taking the AA-AAS. If these students are in an appropriately challenging assessment for their abilities, the percent proficient should be similar as long as the state’s general assessment has not received a low score for difficulty.

    5. Tables 4 and 5 should provide data comparing the proficiency of students taking the AA-AAS and students with disabilities taking the AA-GLAS or general assessment. We need to know whether students taking the AA-AAS have higher proficiency rates than students with disabilities taking the AA-GLAS or general assessment because that is a red flag concerning the difficulty of the AA-AAS and/or the IEP decision-making process to determine eligibility for the AA-AAS. This data is key to follow through on recommendation #4, above.

    Table 6
    We recommend that participation data for each assessment be broken out by disability category so the percentage of students with intellectual disabilities (ID) who are placed in the general assessment or AA-MAS as compared with the AA-AAS will be evident. Only students with the MOST significant cognitive disability are supposed to be eligible for the AA-AAS. This is a small percentage of all students with ID but we suspect that a high percentage of students with ID are placed in the AA-AAS. This break out can be placed in Table 6 or a separate Table. We understand that many states collect data by disability. In fact, 619 data by disability category was available on the website until recently.

    Sample Approaches for Using Assessment Data
    Our comments focus solely on how these approaches would affect students taking the AA-AAS. The purpose of analyzing assessment data using Sample Approach 1a, 1b, 1c, or 2 is to prompt different levels of rating and support to apply to the states. Approaches 1a and 1b do not appear to use the data from Tables 4, 5, and 6 or any other data about the AA-AAS. Therefore, the data from the AA-AAS apparently would not factor into decisions about a state’s rating and the level of support it will need. We cannot accept an approach to using assessment data that considers only data from the general assessment and AA-GLAS.

    It is not clear whether the first three steps of Sample Approach 2 consider data from the AA-AAS in Tables 4-6. There is much discussion of the NAEP, which of course does not apply to the AA-AAS. If data from Tables 4-6 are being considered, then further explanation is needed to show how that works. If data from these tables are not factored into steps 1-3B, then AA-AAS data will only be considered in step 4 if a state is not flagged while going through the earlier steps of review that focus on the general assessment. This is problematic because technical assistance to improve the performance of students with disabilities on the general assessment will not necessarily address the problems that exist for students taking the AA-AAS.

    Sample Approach 1c seems to be the only approach that considers data from every state assessment, including the AA-AAS. However, that data is limited to comparing the “gap between percent of students proficient on regular assessment and percent of students proficient on the alternate assessment based on alternate achievement standards of students with disabilities on all assessments.” This wording is difficult to follow, and it is not clear whether the comparison is between students with disabilities on all assessments or between all students in the general assessment (with and without disabilities) and students taking the AA-AAS. Either way, this is not sufficient data to be considered when more detailed data is drawn from Tables 1-2, which apply only to the general assessment. No data from Tables 4-6, which apply to alternate assessments, is even used in this approach.

  4. The American Speech-Language-Hearing Association (ASHA) is pleased to have the opportunity to comment on the U.S. Department of Education Office of Special Education Programs’ Results Driven Accountability outreach efforts soliciting comments on this new strategy to shift the balance from a system focused primarily on compliance to one that puts more emphasis on results for children with disabilities. ASHA is the professional, scientific, and credentialing association for more than 150,000 members and affiliates who are audiologists, speech-language pathologists (SLPs), and speech, language, and hearing scientists. More than half of our members work in public schools and have an integral and active role in the school community; therefore, education is a priority area for the Association. ASHA members who work in school-based settings provide primary services to over 1.5 million students with speech, language, and hearing disabilities (Data Accountability Center, 2010). The high incidence of speech-language impairments requires a large, highly qualified pool of SLPs to meet these students’ needs.

    ASHA is pleased to submit the following comments to Question 4 on the RDA Blog:

    Although the recommendations from Using Assessment Data as Part of a Results-Driven Accountability System: Input from the NCEO Core Team are appropriate, they do not specifically account for the dynamic assessments that are so important for determining the progress of students with special needs, as well as of students who are English Language Learners. Dynamic assessment is a complex assessment process in which there is active engagement between the assessor and the learner, focused on the learner’s problem-solving skills in a context that measures the learner’s response to intervention (RTI). RTI has proven to be an effective part of the process to identify students with communication impairments. Therefore, incorporating dynamic assessment procedures based in the curriculum, along with a variety of intervention strategies, is recommended.

    Students with special needs may show little progress on standardized assessments, but that may not truly reflect their growth. Growth is demonstrated not only by content acquisition but also by increasing independence in task completion. Growth models can capture small but important changes in academic improvement for children in disability categories such as speech or language impairment, as well as for struggling learners. Such models should measure a student’s performance in a variety of settings, including settings that support transition goals.

    Additionally, the progress monitoring strategies and methods inherent in RTI provide valuable information on student achievement. There are many assessment strategies that have proven to be successful in documenting student progress based on individual student needs and strengths.

    Measuring and adapting to the workloads of professionals responsible for student progress is an important component of improving functional and academic outcomes. One must have an acceptable workload to allow for consultation, collaboration, planning, and teaming, all of which support student outcomes and the provision of FAPE.

    The Common Core State Standards (CCSS), along with postsecondary transition goals, can provide useful academic and functional measures of progress particularly if they are integrated into a student’s IEP goals. We may be more successful in documenting student progress if we take into account the individualized goals for students with special needs. By linking IEP goals to the CCSS and transition goals, we are reinforcing and measuring skills and understandings that all students are expected to learn by the time they graduate. Additionally, by linking with the CCSS, related services providers, including SLPs, are providing additional opportunities for learning and reinforcing current content.

    Assessment results should be used, not to punish, but to support and enhance both the students’ and the professionals’ skills. Also, accessibility on assessments is important for students with disabilities. It is essential that modifications, accommodations, or auxiliary aids or services be appropriately provided to students with disabilities in testing situations.

    Thank you for the opportunity to provide these comments on behalf of ASHA’s members and the students they serve.

  5. All the sample approaches seem to overlook an important problem:

    Even if a state is effective at identifying students who need special education and quickly closing their achievement gaps, their special education population at any given time will lag behind their peers in achievement, because their most successful students with IEPs will have exited special education, and students with greater need are always being identified. “Educational need,” which in practice coincides closely with academic need, is after all one of the qualifications for having an IEP.

    Under the umbrella of “special education,” there is an important distinction to be made between a) students who require special education supports and interventions in order to demonstrate achievement commensurate with their peers, and b) students who require a modified curriculum, more suited to their goals and abilities. We formally recognize membership in the second group when we determine eligibility for one of the state’s alternative assessments.

    We measure success for the first group of students by determining whether through special education we managed to arrest or reverse the widening achievement gap that helped them into special education in the first place. This is done by comparing academic growth relative to peers before and after placement, and by including students who have recently exited special education in the comparison group.

    We measure success for the second group by determining whether they have made growth from year to year on annual assessments based on alternative standards, and by whether they achieved their individualized IEP goals.

    We can also measure success for either group by comparing their post-school outcomes to the post-school outcomes of peers without disabilities.

  6. Just in need of some clarification, will the NCEO draft RDA data measures (reading/math) be the only indicators states would be evaluated on? Sorry if this was already addressed in prior questions.

  7. Thank you for this opportunity to comment on Question 4. I applaud Secretary Duncan’s re-direction of OSEP from paperwork compliance to outcomes. The Secretary has called for this paradigm shift because the mosaic that constitutes Free Appropriate Public Educations (FAPEs) across this nation has previously emphasized the form of IEPs rather than their functional outcomes. The domains that constitute outcomes are key. As the parent of a child with a disability, a special educator, and an advocate for the rights of people with disabilities, I believe those domains should include employment and community integration, in addition to academic competence. These domains populate the universe of educational and functional outcomes. These domains are transition outcomes.

    I believe it is important to recognize that there are two diverse constituencies in this significant paradigm shift from paperwork compliance to transition outcomes: Consumers (i.e. students and their parents); and Providers (state governments and Local Education Agencies, LEAs). I believe the posted responses to RDA questions one, two and three reflect the interests of the diverse consumer constituency.
    The Framing Considerations identified by the Core Team include public transparency and understandability but exclude the collection of additional data. I believe including data from the domains of employment and community integration would not create an additional burden on the provider constituency. These data can be collected using existing resources. Moreover, these data would have substantially greater practical statistical significance for both constituencies.

    It is essential to recognize the diverse needs of these two constituencies with respect to the transparency and understandability Framing Considerations. What might be transparent and understandable to a state government and LEA may not be transparent and understandable to a student and his or her parents. The Core Team recognizes the complexity of the variables that ultimately may be reduced to principal components and seeks to avoid it due to an anticipated cost to transparency. However, I believe we can evaluate outcomes in a statistically sound fashion (i.e. using reliable and valid data and analysis) that has practical statistical significance as well as meet the transparency and understandability Framing Considerations. The transition plan is the bridge that spans these constituencies.

    Asking the right question is fundamental to sound statistical analysis. I assert the better question at this point should be: To what extent do the outcomes articulated in the transition plans for students with disabilities correlate to the actual outcomes? This question provides a parsimonious solution that recognizes the needs of both the consumer and provider constituencies. Students and parents have a say in setting transition goals or outcomes. LEAs have the obligation to provide a FAPE. One would expect that a high positive correlation should result if the parents and student voices are heard and a FAPE is delivered.

    Accordingly, what could be more transparent and understandable (i.e. accessible) to the public than, “Was your child’s transition plan successful?” This question immediately can be addressed through qualitative methodology and data collection. The follow up question, “To what extent did the FAPE your child received result in academic competence?”, may be addressed through quantitative methods.
    The proposed RDA and sample approaches, thoughtfully devised and articulated, will help clarify the above follow up question. The methodology used to answer this question is quantitative and should provide actionable data accessible (i.e. transparent and understandable) to the provider constituency.

    The Core Team is wise to examine the quantitative questions of student performance using multiple measures, given the vast universe of variables associated with the mosaic that constitutes FAPEs. The Team may use multivariate statistics for analysis, with post hoc tests, to reach conclusions with practical significance.

    If we truly want to shift the paradigm from paperwork compliance to evaluation of transition outcomes, we must ask the right questions. Qualitative methodology will provide understandability for consumers and quantitative methodology will provide transparency for providers. Both consumer and provider constituencies will benefit if FAPEs result in more students with disabilities achieving their employment and community integration goals along with individualized academic competence.

  8. I agree with most of the comments, especially the ones regarding IEP paperwork being continually redundant. We already do not have all of the listed assessment options for our students with disabilities within our state. We have a large population of students with and without disabilities who do not earn a State High School Diploma due to previous changes in state legislation regarding State High School Diploma courses of study. However, all of these students are being compared to students without disabilities who are preparing for and can reach post-secondary four-year university goals. As a whole, federal and state regulations are still trying to fit “round pegs” into “square holes”. There has to be some sort of middle ground. It is not fair to hold educators responsible when the legislators making the rules have never been in a classroom full of cross-categorical disabilities, yet “NO child can be left behind”. I am submitting a request that all of those making these decisions take into consideration the individual populations within each state and recognize that not everyone has access to public services (transportation, etc.) like others do. Some of our students have never even been to a movie, a play, or a ball game. To me, a student having the ability to choose and participate in a desired activity, to express these choices and desires, to advocate for self, and to “fit” or be “successful” within the community and/or workplace is the most important. It has been documented time and time again that “academic ability” does not always measure success in the “real” or “normal” world; success usually comes when someone displays skills that include integrity, dependability, continued effort, the ability to get along with others, initiative, and flexibility.

    Therefore, when measuring accountability and determining success, student progress needs to be measured using an overall, holistic picture of the student and not just academic ability. We all have worked at some point with someone who demonstrated great academic ability but was not successful because they lacked the ability to get along with others. I have experience placing a student with a disability on a job who had tremendous academic testing success but lacked the reasoning to keep from dropping his pants to tuck in his shirt wherever he was standing. He would have been considered above “proficient” on testing but to this day cannot seem to hold a job because of our inability to anticipate what inappropriate behavior he will display next. I also think that in this decision-making process, all types of employers need to have input and need to be educated regarding the fact that human beings are all different, from different walks of life, with different abilities, and most of all on learning how to adapt and be flexible when presented with someone who may not be the most academically proficient but would still make a good employee. I see this even in our workplaces when we expect everyone to have a minimum of a four-year degree when it does not always take that to actually perform the job.

  9. After reviewing NCEO’s documents, Sample Approach 1b, Decision Matrix (Without State Proficiency Target), appears to be the most objective approach. The proficiency targets set in State Performance Plans vary from state to state: some states set targets for students with disabilities that are the same as the targets for all students, while other states may set lower targets for students with disabilities. Measuring a state’s success in reaching its own targets may act as an incentive to lower those targets! This would seem contrary to one of the framing considerations for RDA – The use of measures of student performance should provide appropriate incentives to states….
    Sample Approach 2: Decision-Making Steps, while listing the fewest “Cons,” appears to be the most subjective; it will require that different people look at different states, and it will limit the comparability of results across states.

  10. I read with interest all of the previous comments and feel there are a lot of great points being made, such as: why not look at subgroups (specifically autism)? accountability is key; and are standardized tests really a measure of success or progress? I have seen a child passed from year to year without achieving his annual IEP goals, but it doesn’t seem to matter. It’s like the standards are so low he is getting by without learning. He is not required to take standardized tests, so he falls into the cracks of accountability. For these students, it would be useful to look at progress from year to year, and it would also be very relevant to assess parents’ impressions of progress. Often they are the best gatekeepers of their child’s IEP goals, as school staff changes from year to year. Parents’ satisfaction with the curriculum, whether the team followed the IEP, whether the child had sufficient opportunities to achieve IEP goals, and whether the IEP goals seemed appropriate would be a start for a parent survey. Responsiveness of the team to parents’ questions and queries should also be assessed, as parents are an integral part of the team and deserve answers to their questions. I also agree that work skills are important, and I would really like to see executive function curricula.

    • You made some great comments.
      We truly don’t like to just look at the “data,” because data can be changed easily. We love the fact that we were able to see the BIG difference within the same school district, in the same town, only a few miles apart: three years of many challenging issues, data reviews, etc. Following the transition to a new school, the proof was in the pudding. The success was seen “consistently,” but it was unable to come through in the data that the state needed, which saddened us.
      Parental input needs to be a part of this process. We know when our children are making progress, or the lack thereof, whether it agrees with the data or not.
      It is NOT a good thing to just look at the data. I don’t really trust all data anymore. I have found discrepancies in the past and just don’t rely on it unless it comes from an outside source doing the testing.

  11. I believe the purpose of all educational programs is to prepare the student for employment. This would require tracking students over time, but it could be linked to the state’s employment information which provides wage information on individuals. I believe special education programs should focus more on work skills.

  12. First off, it would be nice to see a Part C example.

    This feels like the NCSEAM ranking for focused monitoring selection. If this kind of ranking, or these matrices, are used to determine which states are red so that they’ll get needed TA, then I’m not sure how this will improve practice nationwide, particularly for the yellows or barely-greens. I would support a data-transparent option over ones that are less transparent.

    • I agree and wonder if similar efforts/papers are in the works for Part C. Dr. Musgrove’s introduction to the paper states that results work involves infants and toddlers, but NCEO’s work is just focused on Part B.

  13. I have seen several comments here that allude to the same issue I see. Special Ed is treated as a singular population, yet it is so varied in composition. Because the population is only a few percent of the full population, changes in composition affect the “measured success” of any cohort over time. There are certainly some groups that are large enough to have some measure (or even directional read) inside Special Ed – Autism and Down syndrome come to mind.

    As a parent I can see how a school or district is performing for Special Ed when choosing where to live. But I cannot see how they are doing for my child. A school that does well because its teachers are adept at dealing with disruptive behaviors may not be so good at dealing with other types of learning issues.

    Without a deeper level of granularity, you cannot measure school-level results very well. Students could learn and achieve more or less due to their own skills or disabilities – rather than teacher effectiveness.

  14. This is a reasonable approach given the assumptions that were laid out ahead of time. In a sense, one’s hands are tied in making much of a different system than what we have for AYP if only existing, publicly available data will be used. In other words, under these assumptions, the challenges with NCLB for students with disabilities and the professionals who serve them will continue here. IEPs and student progress, holistically conceived, are not typically public data, and therefore the system relies on standardized measures of progress. Depending on how much the new measures adhere to universal design, this approach may or may not be accessible to all students.

    I did wonder if there was a way to look at an individual child’s growth from a previous year and use that slope as a predictor for later growth, then compare actual gain with expected gain. You would then have some context for where the child is and how much growth is a part of their developmental path. Reliance on grade-level proficiency targets, without a sense of where a student currently is in her journey, is a difficult criterion to apply to individuals who may be more than a year or two below grade level or who may need more time to reach their goals.

    • I agree completely, and would add that quantitative data about IEPs — such as numbers of students who have mastered an individualized math goal this year — could certainly be made public, without violating confidentiality. I think it may be helpful to remember that much of the data we currently collect in order to evaluate our state performance plans was not collected until we saw the need to do so.

  15. I COMPLETELY agree with comments such as Valerie’s. I work in special ed for the district in which I reside. I am also raising a 10-year-old with learning disabilities, ADHD, and PDD-NOS. Many of our children in special education are indeed expected to meet the same standards on exams, and it simply makes no sense. In addition, I have become increasingly frustrated that teachers are being told to use manipulatives and various other tools to assist children with learning, but those “tools” are then taken away when a standardized test is given. Our children are not standard learners, yet they are given the exact same tests and expected to master them. My son has all of this extra support and one-on-one help, yet he continues to struggle. Class sizes are entirely too large now, making it much more difficult for children to learn and for teachers to actually teach. Example: a fourth grader who has ADHD and sensory issues with difficulty learning should not be in a classroom with 32 kids.

  16. The report appears to take into consideration the vast majority of influencing and mitigating factors that go into assessing students with disabilities. As we work very hard to accommodate the needs of all learners, we should see fewer of the ‘higher ability’ types of Special Education-eligible students – like the learning disabled – needing to be identified as Special Education eligible any longer. This should drop our percentages of Special Education-eligible students who are able to pass a proficiency test. In other words, worse proficiency results should tell us we are doing the right kinds of things by no longer identifying the ‘higher ability’ Special Education students.

    • I’m not so sure this is true. Are you saying, then, that a child who can pass a proficiency test should not be in Special Education? Special Education is about much more than passing proficiency tests, if you are referring to a test that measures academic ability. There are also social/emotional components to consider in Special Education.

      • Acknowledged, but what percentage of EDTs, would you guess, focus primarily on social/emotional impacts when determining educational need? It’s no accident that students who are currently being served in special education, when considered as a sub-population, tend to underperform relative to their peers.

  17. I agree with the general concepts in the reports. I am concerned, however, that there is no consideration of subgroups other than special ed. as a general category. The reports don’t address underserved populations that face problems like disproportionality and overrepresentation. This occurs even prior to identification, in that I&RS may not provide services for the same time frame, or services aren’t given to some children at the same level of intensity, and they end up in special ed. Conversely, RTI is being inappropriately used to delay classification of students into special ed., and research shows that earlier intervention produces the best outcomes. ELL is another area of concern and needs to be addressed. And all of this means nothing w/o accountability.

  18. A few concerns have come up with using assessment data. First, states are beginning to transition to new assessments in 2014-15 that probably will not be comparable to the current assessments. It seems premature to develop a system that will have to be redesigned in a few years; data will not be comparable year to year as states move to PARCC or Smarter Balanced. Additionally, significant concerns are being raised regarding the accessibility of the new assessments. The new assessments are largely online, while classroom instruction, for the most part, still is not. Of even greater concern is the accessibility of these assessments for students with sensory impairments/disabilities and students who rely on assistive technology. These questions must be answered before developing a system that relies on assessment data.

    • Thank goodness that the Department of Education is considering how compliance demands take away planning and instructional time from those who need it most! We live in a litigious society, and I understand the importance of compliance. When I was a K-12 special education teacher, I often had to choose between being efficient with my compliance duties and meeting my students’ needs. My students always came first, which meant I had to come in early and stay late.
      The IEP document has evolved into a document to avoid litigation. In a 7-plus-page IEP, only about 4 pages are truly related to the student’s academic and/or behavioral needs. I have taught in 2 different states, and the redundancy of a number of items can be quite time consuming. Please do a national survey of special education teachers. Too many times, policies are made without conversing with those in the field. Once again, I am delighted this issue is being studied.

      • The paperwork process for special education is overwhelming, and many of the forms are very redundant: four pages to state the time considerations for placement, in different fashions. IEPs can run to as many as seventeen pages!! My concern and dedication are for my students, just as Phyllis’s are!! At the beginning of this school year I had to complete twenty-two addendums because class minutes changed from 252 minutes to 256 minutes. These addendums created a total paper trail of over 110 pages. Parents didn’t want the paper; they understood the change in time and were more confused by having to change all those pages for a four-minute change. This was all done after school hours, because my students come first!!

        In regards to RDA, it might be wise to assess data from parents: how do they feel their children are progressing? The IEP is, as stated, an Individualized Education Plan. No child on an IEP falls into a specific bracket or progresses at the same rate. I focus on social skills, employment skills, and independent living skills, including consumer math (all students are missing this), self-determination skills, and students understanding their disability, and I prepare all my students and parents with postsecondary supports following the exit from high school. Some of my students are in the inclusive classroom and some are not! Some of my students may attend four-year college programs, some two-year, some technical, and some may go straight to employment. We work on setting realistic postsecondary goals and focus on successful postsecondary transition. My dedication is to equip all of my students with the skills they will need for life, not to pass a test!

  19. Planning for educational outcomes is only one small part of the planning necessary for all children, not only those with disabilities.
    Person-centered planning, as I’ve suggested before, is the most comprehensive approach to a successful outcome – for life. Parents identify what they want for their children, but the child, as she/he grows older, contributes their wants and hopes and dreams to the mix. The support and structure for life in and out of school is a defining factor in determining a successful, attainable, and sustainable outcome. If the outcome is not sustainable, the plan is doomed to failure and the individual to despair.

  20. When my preschool child was first identified with autism, one of my biggest fears was that he would be denied a good education because of his disability. I doubted that he would ever be challenged with rigorous academic instruction and that expectations for his achievement would be low. I feared seeing his potential unrealized and any chance for a future lost. Providing “special services” seemed to take precedence over academic progress. There are strong tendencies in the system toward compliance with process and procedures followed to the letter, and I understand the legalities, but these can never be allowed to substitute for student achievement. I’m happy to say our son graduated with a 3.7 GPA in the regular ed curriculum and completed his Associate’s degree. However, it requires a strong effort and support from the entire team, the student’s persistence, and unwavering parental commitment to ensure this kind of outcome.

    • Congratulations to your son for his accomplishments. From experience, I am sure that it was your insistence that he receive academic instruction that helped him achieve his goals. Until this happens for all children who have disabilities, we will continue to have many who do not realize academic success.

  21. The results that I want as a parent are these:

    1. I wanted my child to stay in school and graduate – she did. (System & Personal Outcomes)
    2. I wanted my child to have opportunities to be in groups with other children without disabilities in the classroom regardless of her disabilities AND to have meaningful instruction on how to relate as a member of the learning community – the older she became the more rare the opportunities became. (Personal outcomes).

    3. I wanted my child to be able to take a short car ride and, during the trip across town, to be able to talk about a variety of topics instead of the same one, for the entire trip, every day, for years at a time – results (well, we got her head up, and she will put away things she stims with if you say something highly motivating, but not all of the time).

    4. I wanted my child to be able to select things to do at home during her free time besides watching TV, watching DVDs, or listening to audio books – never got there – not something the school knew how to work on; no ABA at home available.

    5. I wanted my child to learn to read and to read beyond a third grade level. – results she got to the 3rd grade level. She got to a sixth grade level (fluency only) on her own after formal reading instruction stopped. She reads functionally every day and has her job because of her ability to read (System and personal outcome).

    6. I wanted my child to learn to recognize amounts up to five without having to count them out, to remember the most basic math facts, and to give the correct change using the dollar-more amount. – Results: after 12 years of math instruction she can do none of those things. (System and personal outcomes).

    7. I wanted my child to be able to live as independently as possible in a supervised apartment setting. She has the skills but not the behavior to do so. I begged for an FBA for years, but apparently those were only for children who are ripping toilets out of the wall. Things are better now in that schools are more readily doing FBAs.

    8. I wanted my child to have speech therapy (language) in middle and high school, the result being that she would have better social conversation skills. – We were told that was only for non-verbal students.

    I am OK with results or process as long as students get what they need. I suspect that no matter what the focus, we will continue to find ways to do only what we know how to do, and not what we need to do, until we learn how to do this differently and create a world where doing what is necessary is funded, highlighted, supported, and understood.

  22. I understand the need for accountability. But whenever I read things like your reports, I am struck by what is not said. “Children with disabilities” is a group that obviously cannot be categorized under one label. The reasons for their learning problems are as varied as they are. One child diagnosed with a learning disability in reading or math may have entirely different abilities and needs than another with the same area of disability. That’s why they have an IEP. Expecting those children to show proficiency on a state reading or math exam is ridiculous. A child diagnosed with a reading disability in third grade, for instance, is identified, in part, because they aren’t learning at the same rate as their peers. Expecting that because they receive individualized instruction they will suddenly start reading at grade level ignores reality. Measuring that student’s progress by whether or not they are proficient on their grade-level exam is unfair to them and to the school teaching them. Where I work, I have seen children in 6th grade who are reading at a third-grade level after 3 years of specialized instruction with 3-4 different teachers and 1:1 tutoring using evidence-based programs. That’s how long it’s taken them to achieve that level. Considering they were reading below grade level when first identified, that’s progress. Yet those kids are expected to take the state proficiencies. They’ve made progress in reading but still can’t pass a grade-level reading test, yet they are expected to? They will continue to make progress. Some will eventually reach grade level. And some won’t.
    No two children with a disability are the same. That’s why they have IEPs. It’s unfair to them to measure progress by comparing them to peers in their area of disability.

    As an SLP who has worked in schools since 1978, I have seen it all when it comes to federal and state regulations. I have watched my workload increase tremendously in terms of the paperwork required to satisfy those regulations. I have had hours (weeks, by now) of continuing ed to understand the rules and regulations so that I am compliant with it all, and it takes an increasingly longer part of my day. I also make sure I am current in my field through continuing ed and continually change what I do if something isn’t working. The only thing that has stayed the same through the years is the diversity of my kids – and the certainty that well-meaning, earnest folks with big titles somewhere in the bureaucracy will meet and decide what needs to be done in schools they have never been to, with professionals and students they will never meet.

  23. First, I think teachers need to be educated on the many different disabilities before we can measure any growth or improvement. Many disabilities share characteristics with one another, such as deafness, auditory processing disorder, and ADHD.
    I have found that a lot of educators have not heard of APD (some special ed teachers may have). For example, a student has high scores on standardized tests; put him in a regular class where the teacher is unfamiliar with APD, and he doesn’t work to his potential because he misses oral directions, is distracted, and can’t concentrate. Most likely this student will not succeed; instead he is deemed lazy, with behavior problems. Disabilities are not always about level of intelligence or physical appearance, so how can you measure what some call hidden disabilities? We have to include these types of students as well, because they are the ones who fall through the cracks of the education system and have a high dropout rate, primarily because they become overwhelmed and frustrated.

  24. The danger in making “functional outcomes” subject to accountability is when it is used as justification for trying to change a student’s core values and personality traits. One of our rights in a free country is that everyone, including students, is free to be who they are, even if those traits aren’t valued by others. For example, many autistic students are pressured into doing things that come harder for them (social engagement mainly) because teachers believe that in order to function in society, they will need those skills. Even if the trait in question is considered a disability or disorder, and even if those target skills are actually useful, it isn’t the role of the school to try to extinguish unwanted traits if the student is not harming others. And the presence of a disability should not mean that students are denied an academic or elective subject because they have to use the time in a special class to supposedly extinguish their core “deficits”. So when the demand for measurable “functional” outcomes is too high, it can lower the acceptance of all kinds of people, and it can infringe on basic civil rights.

  25. While I completely understand the intention of an RDA system, there appears to be a misunderstanding about how vast the special education spectrum really is. The attempt to measure the growth of many of these students is simply a political ploy, and it will not result in any more real academic growth for these students.
    Do yourself and these students a favor: allow the professionals at the schools to challenge them at their own level, without concern for governmental growth measures.

    • I’m skeptical. Will busy LEAs conduct coherent SPED program evaluation voluntarily, and even if they do, will the results be comprehensible to the families of children with disabilities?
