Huber loss (English Wikipedia)

Analysis of the information sources cited in the references of the English-language version of the Wikipedia article "Huber loss".

Website           Global rank    English rank
doi.org           2nd place      2nd place
jstor.org         26th place     20th place
nih.gov           4th place      4th place
web.archive.org   1st place      1st place
acm.org           179th place    183rd place
harvard.edu       18th place     17th place
psu.edu           207th place    136th place
archive.org       6th place      6th place
stanford.edu      1,185th place  840th place

acm.org

dl.acm.org

archive.org

doi.org

  • Huber, Peter J. (1964). "Robust Estimation of a Location Parameter". Annals of Mathematical Statistics. 35 (1): 73–101. doi:10.1214/aoms/1177703732. JSTOR 2238020.
  • Charbonnier, P.; Blanc-Féraud, L.; Aubert, G.; Barlaud, M. (1997). "Deterministic edge-preserving regularization in computed imaging". IEEE Trans. Image Process. 6 (2): 298–311. Bibcode:1997ITIP....6..298C. CiteSeerX 10.1.1.64.7521. doi:10.1109/83.551699. PMID 18282924.
  • Lange, K. (1990). "Convergence of Image Reconstruction Algorithms with Gibbs Smoothing". IEEE Trans. Med. Imaging. 9 (4): 439–446. doi:10.1109/42.61759. PMID 18222791.
  • Friedman, J. H. (2001). "Greedy Function Approximation: A Gradient Boosting Machine". Annals of Statistics. 29 (5): 1189–1232. doi:10.1214/aos/1013203451. JSTOR 2699986.

harvard.edu

ui.adsabs.harvard.edu

jstor.org

  • Huber, Peter J. (1964). "Robust Estimation of a Location Parameter". Annals of Mathematical Statistics. 35 (1): 73–101. doi:10.1214/aoms/1177703732. JSTOR 2238020.
  • Friedman, J. H. (2001). "Greedy Function Approximation: A Gradient Boosting Machine". Annals of Statistics. 29 (5): 1189–1232. doi:10.1214/aos/1013203451. JSTOR 2699986.

nih.gov

pubmed.ncbi.nlm.nih.gov

psu.edu

citeseerx.ist.psu.edu

stanford.edu

statweb.stanford.edu

  • Hastie, Trevor; Tibshirani, Robert; Friedman, Jerome (2009). The Elements of Statistical Learning. p. 349. Archived from the original on 2015-01-26. Compared to Hastie et al., the loss is scaled by a factor of 1/2 to be consistent with Huber's original definition given earlier. For all its elegance and simplicity, the Huber loss is of little practical use in machine learning unless it is scaled by a quantity estimated from the data, because the threshold δ cannot be set blindly and still be effective.
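For context, the definition that note refers to, with Huber's original 1/2 scaling and threshold δ, is (a standard statement of the loss, not quoted from the cited page):

    L_\delta(a) =
    \begin{cases}
      \frac{1}{2} a^2 & \text{for } |a| \le \delta, \\
      \delta \left( |a| - \frac{1}{2}\delta \right) & \text{otherwise.}
    \end{cases}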

web.archive.org

  • Hastie, Trevor; Tibshirani, Robert; Friedman, Jerome (2009). The Elements of Statistical Learning. p. 349. Archived from the original on 2015-01-26. Compared to Hastie et al., the loss is scaled by a factor of 1/2 to be consistent with Huber's original definition given earlier. For all its elegance and simplicity, the Huber loss is of little practical use in machine learning unless it is scaled by a quantity estimated from the data, because the threshold δ cannot be set blindly and still be effective.
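A minimal sketch of the same definition in Python (NumPy assumed; the function name huber_loss is illustrative, not taken from the cited book):

    import numpy as np

    def huber_loss(residual, delta=1.0):
        """Huber loss with Huber's original 1/2 scaling.

        Quadratic for |residual| <= delta, linear beyond it, so large
        residuals are penalized less harshly than under squared error.
        """
        r = np.asarray(residual, dtype=float)
        quadratic = 0.5 * r ** 2
        linear = delta * (np.abs(r) - 0.5 * delta)
        return np.where(np.abs(r) <= delta, quadratic, linear)

In practice, δ is tied to the scale of the residuals, for instance via a robust estimate such as the median absolute deviation; that is the data-dependent scaling the note above insists on.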