Fire Station 22 Presentation | Design Commission, Bias Is To Fairness As Discrimination Is To

July 8, 2024, 2:54 pm

The Durango Fire Protection District has purchased a 3.298-acre piece of property at 201 E 12th St. from Durango School District 9R. The agenda includes a presentation from Chief Hal Doughty and a moderated question-and-answer session drawn from pre-submitted questions. Fire Station Building Committee Approved Building Layout – September 4th, 2014.

  1. Fire station floor plans pdf.fr
  2. Fire station floor plans pdf
  3. Fire station building plans
  4. Fire station floor plans pdf document
  5. Test bias vs test fairness
  6. Bias is to fairness as discrimination is too short
  7. Bias is to fairness as discrimination is to believe

Fire Station Floor Plans Pdf.Fr

If the district is adding a unique use that is not allowed by right within the Central Business District zone, it may need to go through the Planned Development process with Planning Commission and City Council reviews and approvals. Nighttime View. Questions and Discussion. WestEast Design Group, LLC – Architectural, Interior Design, Planning; Mechanical, Electrical + Plumbing 210. No decisions were made. Grading and Drainage Plan. Fire Station Building Committee Projected Timeline by Reinhardt Associates. Fire Headquarters Projected Budget. Designer Interview Process Letter. This property contains the 9R administration building and the Durango Big Picture High School. No application has been submitted yet.

Fire Station Floor Plans Pdf

Fire Station Building Committee Meeting Minutes – April 10, 2014. Collins Center Public Safety Facilities Study. Fire Chief Hal Doughty presented an overview of the draft project to staff but said the project is not ready to be formally submitted to the city yet.

Fire Station Building Plans

Overlay Concept: demonstration of how the concept works at multiple levels, from the city to the building. Items on the agenda: discussion on an alternate location to be considered for the Downtown Fire Station. Fire Station 22 Presentation — original pdf. Main Entry Rendering: employee parking is on the south end of the site next to the cell tower. Covered Walkway Rendering: the steel, almost Miesian nature of the covered walkway represents the technology side of the site, while the trees to the north create a focal point of moving toward the nature side of the site. Discussion about the Historic Designation of the School Building.

Fire Station Floor Plans Pdf Document

View the full agenda and recording of the meeting at … There was no public comment. Layout Plan. Site Plan. First Floor Plan. Second Floor Rendering: this image shows the public face of the station prominently addressing the major intersection. Training Facility Projected Budget. The 3.298-acre property at 201 E 12th St., outlined below in solid yellow, was purchased from Durango School District 9R. Any proposed use of the 9R property for a relocated downtown fire station will require a full review by the city, including public input. Agenda for Pre-Bid Conference – Central Fire Station. March 3, 2022: The fire district is hosting a public forum from 6 to 8 p.m. on Thursday, March 10, 2022. Oct. 27, 2021: Durango Fire Protection District representatives met with city staff for an initial discussion on the district's conceptual plan for the Durango School District 9R building. Part of the area is covered with fans, and the other part is open to the sky. Healing Nature of the Living Quarters Rendering: the day room opens to an outdoor dining and grilling area. Training Facility Sub Committee Meeting Minutes – November 18, 2015. Diagram Concept: showing how the concept came together on the site, with the individual and technology issues to the south by the cell tower and the collective, nature, and healing areas to the north by the grove of trees.

Jan. 18, 2022: The Durango Fire Protection District Board of Directors hosted a study session at 7:30 a.m., online or at 142 Sheppard Drive, to discuss the Downtown Fire Station. Description: The new station could be built at the site of a small playground. Traffic Study with Truck Routing Plan. Contacts: The Durango Fire Protection District Board members asked that the City of Durango make their phone numbers available to the public. Water Study/Spill Cleanup – 99 Main Street. Meditation Areas Rendering: the exterior lighting reinforces the idea of this being a 24-hour facility that serves as a beacon of safety to the surrounding neighborhood. The fire district's offer to buy the 9-R administration building is a transaction between the school district and the fire district. Connection Between Inside and Out Rendering: each day room has an indentation in the façade that opens to an outdoor meditation area, perfect for decompressing from the stress caused by going out on a call. Corner of Riverside and Faro Rendering: signage and massing combine to make a welcoming, intuitive, and easy-to-find entry. If the project adds more than 10,000 square feet of new building area, the district would need a Major Site Plan Review Process with a Planning Commission review; or, if it is adding a unique use not allowed by right, the Planned Development process described above. The sequence is surrounded by trees on the west side and the purpose of their being there (the apparatus bay) on the east. Discussion about City and Fire District community connections and joint efforts.

Of course, there exist other types of algorithms. One of the features is protected (e.g., gender, race), and it separates the population into several non-overlapping groups (e.g., Group A and Group B). Predictive Machine Learning Algorithms. That is, to charge someone a higher premium because her apartment address contains 4A while her neighbour (4B) enjoys a lower premium does seem to be arbitrary and thus unjustifiable. The issue of algorithmic bias is closely related to the interpretability of algorithmic predictions. An employer should always be able to explain and justify why a particular candidate was ultimately rejected, just as a judge should always be in a position to justify why bail or parole is granted or not (beyond simply stating "because the AI told us"). In the next section, we flesh out in what ways these features can be wrongful.

Test Bias Vs Test Fairness

Kamishima, T., Akaho, S., & Sakuma, J. Fairness-aware learning through regularization approach. A statistical framework for fair predictive algorithms, 1–6. If it turns out that the algorithm is discriminatory, instead of trying to infer the thought process of the employer, we can look directly at the trainer. Proceedings of the 2009 SIAM International Conference on Data Mining, 581–592. This is conceptually similar to balance in classification. Lum, K., & Johndrow, J. Consequently, a right to an explanation is necessary from the perspective of anti-discrimination law because it is a prerequisite to protecting persons and groups from wrongful discrimination [16, 41, 48, 56]. As discussed in Sect. 3, the use of ML algorithms raises the question of whether they can lead to other types of discrimination which do not necessarily disadvantage historically marginalized groups or even socially salient groups. AI's fairness problem: understanding wrongful discrimination in the context of automated decision-making. Bias is a component of fairness: if a test is statistically biased, it is not possible for the testing process to be fair.

(2017) demonstrates that maximizing predictive accuracy with a single threshold (one that applies to both groups) typically violates fairness constraints. First, the context and potential impact associated with the use of a particular algorithm should be considered. For instance, in Canada, the "Oakes Test" recognizes that constitutional rights are subject to reasonable limits "as can be demonstrably justified in a free and democratic society" [51]. If we worry only about generalizations, then we might be tempted to say that algorithmic generalizations may be wrong, but it would be a mistake to say that they are discriminatory. Understanding Fairness. United States Supreme Court (1971). Insurance: Discrimination, Biases & Fairness. Thirdly, and finally, one could wonder if the use of algorithms is intrinsically wrong due to their opacity: the fact that ML decisions are largely inexplicable may make them inherently suspect in a democracy. Footnote 10: As Kleinberg et al. There are many fairness criteria, but popular options include 'demographic parity', where the probability of a positive model prediction is independent of the group, and 'equal opportunity', where the true positive rate is similar for different groups. Bechavod, Y., & Ligett, K. (2017).
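
As a rough illustration of how these two criteria differ, here is a minimal sketch in Python (variable names and data are hypothetical, not taken from any cited work) that computes the demographic parity gap and the equal opportunity gap for two groups from binary predictions and labels:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Gap in positive-prediction rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Gap in true-positive rates between the two groups."""
    tpr_0 = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr_1 = y_pred[(group == 1) & (y_true == 1)].mean()
    return abs(tpr_0 - tpr_1)

# Toy data: binary predictions, true outcomes, and group membership.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_gap(y_pred, group))         # 0.25
print(equal_opportunity_gap(y_true, y_pred, group))  # ~0.33
```

A decision rule can close one of these gaps while leaving the other open, which is why the choice of criterion matters.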

Miller, T.: Explanation in artificial intelligence: insights from the social sciences. Next, it is important that there is minimal bias present in the selection procedure. The research revealed that leaders in digital trust are more likely to see revenue and EBIT growth of at least 10 percent annually. Jean-Michel Beacco, Delegate General of the Institut Louis Bachelier. We cannot ignore the fact that human decisions, human goals, and societal history all affect what algorithms will find.

Bias Is To Fairness As Discrimination Is Too Short

Therefore, some generalizations can be acceptable if they are not grounded in disrespectful stereotypes about certain groups, if one gives proper weight to how the individual, as a moral agent, plays a role in shaping their own life, and if the generalization is justified by sufficiently robust reasons. Introduction to Fairness, Bias, and Adverse Impact. This explanation is essential to ensure that no protected grounds were used wrongfully in the decision-making process and that no objectionable, discriminatory generalization has taken place. In this paper, we focus on algorithms used in decision-making for two main reasons.

It is essential to ensure that procedures and protocols protecting individual rights are not displaced by the use of ML algorithms. Others (2011) argue for an even stronger notion of individual fairness, where pairs of similar individuals are treated similarly. Alternatively, the explainability requirement can ground an obligation to create or maintain a reason-giving capacity so that affected individuals can obtain the reasons justifying the decisions which affect them. Kamiran, F., & Calders, T. Classifying without discriminating. This question is the same as the one that would arise if only human decision-makers were involved, but resorting to algorithms could prove useful in this case because it allows for a quantification of the disparate impact. Murphy, K.: Machine learning: a probabilistic perspective. By definition, an algorithm does not have interests of its own; ML algorithms in particular function on the basis of observed correlations [13, 66]. If this computer vision technology were to be used by self-driving cars, it could lead to very worrying results, for example by failing to recognize darker-skinned subjects as persons [17]. In: Collins, H., Khaitan, T. (eds.) Harvard University Press, Cambridge, MA (1971).
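
To make the idea of individual fairness concrete, the following sketch treats it as a Lipschitz-style condition: the difference between two individuals' scores should not exceed the (scaled) distance between their feature vectors. This is only an illustration under an assumed Euclidean similarity metric, not the construction from the cited work:

```python
import numpy as np

def individual_fairness_violations(X, scores, lipschitz=1.0):
    """Return pairs (i, j) whose score difference exceeds the
    scaled distance between their feature vectors."""
    violations = []
    n = len(X)
    for i in range(n):
        for j in range(i + 1, n):
            feature_dist = np.linalg.norm(X[i] - X[j])  # assumed similarity metric
            score_dist = abs(scores[i] - scores[j])     # difference in treatment
            if score_dist > lipschitz * feature_dist:
                violations.append((i, j))
    return violations

# Toy example: two nearly identical applicants with very different scores.
X = np.array([[1.0, 2.0], [1.0, 2.1], [5.0, 8.0]])
scores = np.array([0.9, 0.2, 0.5])
print(individual_fairness_violations(X, scores))  # [(0, 1)]: similar pair, dissimilar treatment
```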

If a certain demographic is under-represented in building AI, it is more likely to be poorly served by it. In principle, sensitive data like race or gender could be used to maximize the inclusiveness of algorithmic decisions and could even correct human biases. Clearly, given that this is an ethically sensitive decision which has to weigh the complexities of historical injustice, colonialism, and the particular history of X, decisions about her should not be made simply on the basis of an extrapolation from the scores obtained by the members of the algorithmic group she was put into. Standards for educational and psychological testing. The Washington Post (2016). Legally, adverse impact is defined by the 4/5ths rule, which involves comparing the selection or passing rate of the group with the highest selection rate (the focal group) with the selection rates of other groups (subgroups). This can be used in regression problems as well as classification problems. First, we identify different features commonly associated with the contemporary understanding of discrimination from a philosophical and normative perspective and distinguish between its direct and indirect variants. Otherwise, it will simply reproduce an unfair social status quo. Algorithms may provide useful inputs, but they require human competence to assess and validate these inputs. Bozdag, E.: Bias in algorithmic filtering and personalization. This may amount to an instance of indirect discrimination. Other work (2009) developed several metrics to quantify the degree of discrimination in association rules (or IF-THEN decision rules in general).
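
As a rough sketch of the 4/5ths rule check described above (the group labels and rates below are hypothetical), each subgroup's selection rate is divided by the focal group's rate and flagged when the ratio falls below 0.8:

```python
def adverse_impact_ratios(selection_rates):
    """Compare each group's selection rate to the highest rate.
    Ratios below 0.8 flag potential adverse impact under the 4/5ths rule."""
    focal_rate = max(selection_rates.values())
    return {group: rate / focal_rate for group, rate in selection_rates.items()}

# Hypothetical selection rates by group.
rates = {"group_a": 0.60, "group_b": 0.45, "group_c": 0.30}
for group, ratio in adverse_impact_ratios(rates).items():
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: ratio {ratio:.2f} ({flag})")
```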

Bias Is To Fairness As Discrimination Is To Believe

Which biases can be avoided in algorithm-making? Pianykh, O. S., Guitron, S., et al. Moreover, if observed correlations are constrained by the principle of equal respect for all individual moral agents, this entails that some generalizations could be discriminatory even if they do not affect socially salient groups. Despite these problems, fourthly and finally, we discuss how the use of ML algorithms could still be acceptable if properly regulated.

As mentioned above, here we are interested in the normative and philosophical dimensions of discrimination. Second, however, this case also highlights another problem associated with ML algorithms: we need to consider the underlying question of the conditions under which generalizations can be used to guide decision-making procedures. Therefore, the use of ML algorithms may be useful to gain efficiency and accuracy in particular decision-making processes. We are extremely grateful to an anonymous reviewer for pointing this out. Model post-processing changes how the predictions are made from a model in order to achieve fairness goals. However, we can generally say that the prohibition of wrongful direct discrimination aims to ensure that wrongful biases and intentions to discriminate against a socially salient group do not influence the decisions of a person or an institution which is empowered to make official public decisions or which has taken on a public role (i.e., an employer, or someone who provides important goods and services to the public) [46]. The justification defense aims to minimize interference with the rights of all implicated parties and to ensure that the interference is itself justified by sufficiently robust reasons; this means that the interference must be causally linked to the realization of socially valuable goods, and that the interference must be as minimal as possible. After all, as argued above, anti-discrimination law protects individuals from wrongful differential treatment and disparate impact [1]. AEA Papers and Proceedings, 108, 22–27. It is also important to choose which model assessment metric to use; these measure how fair your algorithm is by comparing historical outcomes to model predictions. Before we consider their reasons, however, it is relevant to sketch how ML algorithms work. Other work (2017) proposes to build an ensemble of classifiers to achieve fairness goals. Fairness encompasses a variety of activities relating to the testing process, including the test's properties, reporting mechanisms, test validity, and consequences of testing (AERA et al., 2014).
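
To illustrate what post-processing can look like in practice, the sketch below (hypothetical data and function names; not a reference implementation of the cited proposals) picks a separate score threshold per group so that roughly the same share of each group receives a positive decision:

```python
import numpy as np

def group_thresholds_for_parity(scores, group, target_rate):
    """Post-processing sketch: choose a per-group score threshold so that
    roughly `target_rate` of each group receives a positive decision."""
    thresholds = {}
    for g in np.unique(group):
        group_scores = scores[group == g]
        # The (1 - target_rate) quantile admits about target_rate of the group.
        thresholds[g] = np.quantile(group_scores, 1 - target_rate)
    return thresholds

def apply_thresholds(scores, group, thresholds):
    """Convert scores into decisions using each individual's group threshold."""
    return np.array([scores[i] >= thresholds[group[i]] for i in range(len(scores))])

scores = np.array([0.9, 0.7, 0.4, 0.3, 0.8, 0.6, 0.5, 0.2])
group  = np.array([0,   0,   0,   0,   1,   1,   1,   1])
thresholds = group_thresholds_for_parity(scores, group, target_rate=0.5)
print(thresholds, apply_thresholds(scores, group, thresholds))
```

Post-processing of this kind leaves the underlying model untouched and only adjusts how its scores are converted into decisions, which is what distinguishes it from pre-processing the data or constraining the training itself.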

Many AI scientists are working on making algorithms more explainable and intelligible [41]. Thirdly, we discuss how these three features can lead to instances of wrongful discrimination in that they can compound existing social and political inequalities, lead to wrongful discriminatory decisions based on problematic generalizations, and disregard democratic requirements. There is evidence suggesting trade-offs between fairness and predictive performance. If you hold a BIAS, then you cannot practice FAIRNESS. Noise: a flaw in human judgment.

Yang and Stoyanovich (2016) develop measures for rank-based prediction outputs to quantify and detect statistical disparity. 86(2), 499–511 (2019). Proceedings of the 30th International Conference on Machine Learning, 28, 325–333. How can a company ensure their testing procedures are fair? After all, generalizations may not only be wrong when they lead to discriminatory results. The White House released the American Artificial Intelligence Initiative: Year One Annual Report and supported the OECD policy.
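
As a simple illustration of disparity in ranked outputs (a rough sketch, not the exact measures developed by Yang and Stoyanovich), one can compare a group's share of the top-k positions with its share of the full ranked list:

```python
def topk_representation_gap(ranking_groups, k):
    """For each group, return its share of the top-k positions minus
    its share of the full ranked list (negative = under-represented)."""
    top_k = ranking_groups[:k]
    gaps = {}
    for g in set(ranking_groups):
        overall_share = ranking_groups.count(g) / len(ranking_groups)
        topk_share = top_k.count(g) / k
        gaps[g] = topk_share - overall_share
    return gaps

# Ranked candidates labelled by group, best first.
ranking = ["a", "a", "a", "b", "a", "b", "b", "b"]
print(topk_representation_gap(ranking, k=4))  # group "b" is under-represented at the top
```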
