Proposal Evaluation

Frequently Asked Questions

Q. What do I need to document when evaluating proposals?

A. The Best Practices Procurement Manual (BPPM), Section 5.4 - Documentation of Procurement Actions, contains guidance on how to meet the requirements of FTA Circular 4220.1F, which requires a written record of the procurement history. Among the items required are the reason for contractor selection or rejection and the basis for the contract price. Grantees must perform a technical evaluation of competitive proposals, and this evaluation needs to document the relative strengths and weaknesses of the proposals, together with the technical risks (if any) of the various approaches being proposed. This technical evaluation may include scoring the proposals numerically or using adjectival ratings. The evaluators also must provide a written narrative of the reasons for their ratings. The official within the grantee’s organization who is making the selection decision needs to document the basis for the decision to select the offeror "whose proposal is most advantageous to the grantee’s program, with price and other factors considered." With respect to the contract price, grantees need to document the file to reflect the cost or price analysis performed on the winning proposal in order to make an affirmative determination that the price being paid is "fair and reasonable." The BPPM, Section 5.2 - Cost and Price Analysis, discusses some common cost and price analysis techniques. (Revised: September 9, 2009)



Q. In an RFP, the evaluation criteria percentages are assigned as follows: price 45%, experience 20%, equipment/facility 15%, plan of action 15%, and responsiveness to overall proposal 5%. If a vendor does not submit a plan of action at all, can this vendor be considered non-responsive?

A. If your RFP made the submission of a Plan of Action mandatory, a vendor that did not submit the required Plan of Action would be non-responsive to the RFP. (Revised: September 9, 2009)



Q. We received two proposals in response to our RFP. Proposal #1 had a high price and an overall technical score of 80%. Proposal #2 was priced low, with an overall technical score of 56%. Is it permissible to approach Proposal #1 for a best and final offer, or do both proposers have to be approached? During the negotiation, can we tell the vendor that their price was higher than our budget?

A. The conduct of discussions with offerors is covered in the BPPM, Section 4.5.4 - Discussions and Clarifications, and the topic of best and final offers is covered in the BPPM, Section 4.5.5.2 - Request for Best and Final Offer. If discussions/negotiations are held, they should be held with all offerors in the competitive range; i.e., with all offerors that have a reasonable chance of winning the contract. If one of the two offerors has no reasonable chance of winning the contract, we would not advise you to conduct negotiations with that offeror. You should not request a best and final offer from a company that has no reasonable chance of being selected for the contract. During the negotiations you can tell the offeror that its price is outside the range of what you can afford to spend, and you can attempt to negotiate a lower price. (Revised: September 9, 2009)



Q. I am working on an evaluation plan for an RFP, and I am trying to decide on the formula to use for the technical and cost evaluations. Everyone I talk to uses a different formula. I have a percentage for technical and a percentage for cost; however, some say the weights assigned to the technical evaluation factors should add up to the overall percentage assigned to technical, and likewise for cost, while others say they should add up to 100 percent. For example, if technical is worth 70 percent and cost is worth 30 percent, should the technical evaluation factors add up to 70 percent or to 100 percent? Is there a standard formula, or an approach that FTA prefers?

A. The FTA Best Practices Procurement Manual (BPPM) discusses proposal evaluations in Section 4.5.2 - Evaluation of Proposals. Following is an excerpt from that discussion concerning the scoring of technical and price proposals, which, as you will see, recommends against assigning a numerical score to prices. It strongly recommends that grantees evaluate and score the technical proposals and then conduct a best value tradeoff, comparing the technical strengths and weaknesses with the prices offered to determine the "Best Value."

Proposal Evaluation Mechanics

There are many different methods of conducting proposal evaluations to determine best value, and many opinions as to which is the best approach. Grantees may employ any rating method or combination of methods, including: color or adjectival ratings, numerical weights, and ordinal rankings. Whatever the method, the important thing is that a statement of the relative strengths, deficiencies, significant weaknesses, and risks supporting the evaluation ratings be documented in the contract file.

Some agencies have employed a quantitative approach of assigning scores to both technical and cost proposals, thereby compelling a source selection that is essentially mathematically derived. Proponents of this method usually argue that it is the most "objective," and therefore the fairest, approach to determining a winner. On closer examination, however, all approaches are, to one degree or another, subjective. The decision regarding what score to assign to any given factor is subjective, and any formulas employed after the initial scoring cannot make the process an "objective" one. Further, grantees must be allowed the flexibility of making sound, factually based decisions that are in their agency's best interests. We also believe that any approach that assigns a predetermined numerical weight to price, and then seeks to "score" price proposals and factor that score into a final overall numerical grade to automatically determine contract award, is a mistake. Rather, we believe that agencies should evaluate the prices offered but not score the price proposals. Prices should be evaluated and brought alongside the technical proposal scores in order to make the necessary tradeoff decisions as to which proposal represents the best overall value to the agency. Agencies should carefully consider the technical merits of the competitors and the price differentials to see if a higher-priced proposal warrants the award based on the benefits it offers to the agency compared to a lower-priced proposal. This is a subjective, tradeoff-based decision-making process.

The difficulty in trying to assign a predetermined weight to price and then scoring price proposals is that no one is smart enough to predict in advance how much more should be paid for certain incremental improvements in technical scores or rankings (depending on what scoring method is used). For example, no one can predict the nature of what will be offered in the technical proposals until those proposals are opened and evaluated. Only then can the nature of what is offered be ascertained and the value of the different approaches proposed be measured. It is against the actual technical offers made that the prices must be compared in a tradeoff process. Agencies cannot predict in advance whether a rating of "Excellent" for a technical proposal will be worth $X more than a rating of "Good," or whether a score of 95 is worth considerably more or only marginally more than a score of 87. It is what is underneath the "Excellent" and the "Good" ratings, or what has caused a score of 95 vs. a score of 87, that is critical. The goal is to determine if more dollars should be paid to buy the improvement, and equally important, how many more dollars those improvements are perceived to be worth. It could well be that the improvements reflected in the higher ratings are worth little in terms of perceived benefits to the agency. In this case, the grantee does not want to get "locked in" to a mathematically derived source selection decision. This may very well happen when price has been assigned a numerical score and the selection is based on a mathematical formula instead of a well-reasoned analysis of the relative benefits of the competing proposals.

Some agencies have recognized the pitfalls of using arithmetic schemes to make source selection decisions. They have opted not to use numerical scores to evaluate technical proposals and have gone to adjectival ratings instead; e.g., "Acceptable," "Very Good," and "Excellent." They have also heavily emphasized the need for substantive narrative explanations of the reasons for the adjectival ratings, and the Source Selection Official then focuses on the narrative explanations in determining whether it is in the agency's best interest to pay a higher price for the technical improvements being offered. In this scenario, price is evaluated and considered alongside technical merit in a tradeoff fashion, using good business judgment to choose the proposal that represents the best value to the agency.

(Reviewed: September 9, 2009)
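
As a purely illustrative sketch of the arithmetic behind the question, the short Python example below uses hypothetical sub-factor weights, scores, and prices; none of these numbers come from the BPPM or any FTA guidance. It shows that whether the technical sub-factor weights are written so they sum to 70 or to 100, a consistently normalized weighted average ranks the proposals the same way, and, consistent with the excerpt above, it leaves price unscored so that it can be weighed against technical merit in a separate tradeoff.

# Hypothetical illustration only: the sub-factor weights, raw scores, and prices
# below are invented for this sketch and are not from the BPPM or FTA guidance.

# The same technical sub-factor weights expressed two ways.
weights_sum_100 = {"experience": 40, "plan_of_action": 35, "equipment": 25}   # sum to 100
overall_technical_share = 0.70                                                # technical worth 70% overall
weights_sum_70 = {k: v * overall_technical_share for k, v in weights_sum_100.items()}  # sum to 70

# Raw evaluator scores (0-100) for two hypothetical proposals.
scores = {
    "Proposal A": {"experience": 90, "plan_of_action": 85, "equipment": 70},
    "Proposal B": {"experience": 75, "plan_of_action": 92, "equipment": 80},
}
prices = {"Proposal A": 1_100_000, "Proposal B": 1_000_000}  # evaluated, but not scored

def weighted_total(raw, weights):
    # Normalized weighted average: the scale of the weights cancels out,
    # so weights summing to 70 or to 100 produce the same result and ranking.
    return sum(raw[factor] * weight for factor, weight in weights.items()) / sum(weights.values())

for name, raw in scores.items():
    t100 = weighted_total(raw, weights_sum_100)
    t70 = weighted_total(raw, weights_sum_70)
    print(f"{name}: technical score {t100:.1f} (same either way: {t70:.1f}), price ${prices[name]:,}")

# Consistent with the BPPM excerpt above, price is not folded into the score.
# The selection official would compare the documented technical strengths behind
# the scores against the $100,000 price difference and decide, as a tradeoff,
# whether Proposal A's advantages are worth the extra cost.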



Q. Please advise whether FTA requires grantees to publish final scores from evaluations of RFPs.

A. FTA does not require grantees to publish competitive proposal evaluation data such as technical evaluation scores. However, you will need to check your state and local regulations in this regard to see what they might require. Some states, such as Florida, may require you to publish this kind of information under their so-called "Sunshine" statutes. (Posted: May 2010)



Q. Typically, during the RFP development stage, Purchasing works with the Using department to develop the evaluation criteria, along with the guidelines for applying the criteria to the proposals, for the evaluation committee. If the User provides the criteria but insists on providing the guidelines after the RFP has been released, is this acceptable?

A. We believe your traditional method of developing both the evaluation criteria and the guidelines for applying the criteria to proposals concurrently is the best approach and should be continued. We see a risk that the user may develop guidelines that, had they been known before releasing the RFP, would have changed either the criteria or the technical proposal instructions for submission of technical data. We therefore recommend that you follow your original procedure; i.e., develop (a) the criteria, (b) the guidelines for applying the criteria, and (c) the technical proposal instructions for inclusion in the RFP before the RFP is released. This will ensure that the criteria and the proposal instructions are exactly what you want them to be after you have thought through the methodology of the entire evaluation process. (Posted: November, 2010)



Q. Can the identity of the Evaluation Team/Selection Committee be revealed to proposers after proposals have been submitted but before selection has been made?

Background Information: We received proposals on Friday, and on the following Tuesday one of the proposers requested the names of the people involved in the Evaluation Team/Selection Committee.

A. There is no FTA requirement or prohibition regarding the disclosure of the agency officials who are evaluating proposals. However, we would recommend against disclosing this information, as it could be used by an offeror to contact evaluation team members during the evaluation process, something that should clearly be avoided in order to preclude the appearance of an impropriety that could undermine the integrity of the procurement process. We would also suggest that the names of evaluators not be revealed after award. The appropriate means of communicating with those companies that were not selected for award is a formal debriefing by the Procurement Official (with assistance from technical personnel). The debriefing would disclose the strengths and weaknesses of the offeror’s proposal in order to help the company be more competitive in future procurements. Individual agency officials should never be put in a position of having to defend their individual ratings of a proposal. What should be communicated to the company is the agency's final (organizational) assessment of the proposal. We understand that certain states have "Sunshine" laws that may conflict with our recommendation regarding disclosure of individual ratings. (Posted: January, 2015)



Q. Can a grantee evaluate the speed of a streetcar being acquired and assign a bonus weight of five percent to a speed of 50 mph when only one manufacturer can meet that speed? Would this be regarded as a restrictive specification?

A. Since the 50 mph is not a specification requirement, this would not be a restrictive specification. The grantee can award more points during its evaluation of the various vehicles being offered, based on the increased performance of this streetcar. (Posted: November, 2015)
