This paper introduces a learning method for two-layer feedforward neural networks based on sensitivity analysis, which uses a linear training algorithm for each of the two layers. First, random values are assigned to the outputs of the first layer; these initial values are then updated using sensitivity formulas, which involve the weights of both layers, and the process is repeated until convergence. Since the weights are learnt by solving a linear system of equations, there is a substantial saving in computational time. The method also gives the local sensitivities of the least-squares errors with respect to the input and output data at no extra computational cost, because the necessary information becomes available during training without additional calculations. This method, called the sensitivity-based linear learning method, can also be used to provide an initial set of weights, which significantly improves the behavior of other learning algorithms.
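The training loop described above can be illustrated with a minimal sketch. This is an assumed simplification, not the paper's exact formulation: it uses `tanh` hidden units, fits each layer by ordinary linear least squares (layer 1 maps the inputs to `arctanh` of the assigned hidden outputs `Z`, layer 2 maps `Z` linearly to the targets), and then nudges `Z` along the gradient of the total squared error, standing in for the sensitivity formulas. All sizes, the step size `eta`, and the iteration count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: n samples, d inputs, h hidden units, m outputs.
n, d, h, m = 200, 3, 5, 2
X = rng.normal(size=(n, d))
Y = np.tanh(X @ rng.normal(size=(d, m)))  # targets in (-1, 1)

# Step 1: assign random values to the outputs Z of the first layer.
Z = np.clip(rng.uniform(-0.9, 0.9, size=(n, h)), -0.99, 0.99)

eta = 2e-3  # step size for the sensitivity-based update of Z (assumed value)
for _ in range(300):
    # Each layer's weights come from a linear least-squares solve:
    # layer 1 fits X -> arctanh(Z); layer 2 fits Z -> Y (linear output).
    W1 = np.linalg.lstsq(X, np.arctanh(Z), rcond=None)[0]
    W2 = np.linalg.lstsq(Z, Y, rcond=None)[0]

    # Residuals of the two linear systems.
    R1 = X @ W1 - np.arctanh(Z)
    R2 = Z @ W2 - Y

    # Gradient of the total squared error with respect to Z
    # (a stand-in for the paper's sensitivity formulas).
    grad_Z = -2.0 * R1 / (1.0 - Z**2) + 2.0 * R2 @ W2.T
    Z = np.clip(Z - eta * grad_Z, -0.99, 0.99)

# Final pass: refit the output layer on the actual hidden activations,
# then evaluate the trained network x -> tanh(x W1) -> linear output.
H = np.tanh(X @ W1)
W2 = np.linalg.lstsq(H, Y, rcond=None)[0]
mse = float(np.mean((H @ W2 - Y) ** 2))
```

Because each inner step is a least-squares solve rather than gradient descent on the weights, the per-iteration cost is dominated by two small linear systems, which is the source of the computational saving the abstract refers to.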