
Web archiving

Web archiving is the process of collecting portions of the World Wide Web to ensure the information is preserved in an archive for future researchers, historians, and the public. Web archivists typically employ web crawlers for automated capture due to the massive size and amount of information on the Web. The largest web archiving organization based on a bulk crawling approach is the Wayback Machine, which strives to maintain an archive of the entire Web.

The growing portion of human culture created and recorded on the web makes it inevitable that more and more libraries and archives will have to face the challenges of web archiving.[1] National libraries, national archives and various consortia of organizations are also involved in archiving culturally important Web content.

Commercial web archiving software and services are also available to organizations who need to archive their own web content for corporate heritage, regulatory, or legal purposes.

History and development

While curation and organization of the web has been prevalent since the mid- to late-1990s, one of the first large-scale web archiving projects was the Internet Archive, a non-profit organization created by Brewster Kahle in 1996.[2] The Internet Archive released its own search engine for viewing archived web content, the Wayback Machine, in 2001.[2] As of 2018, the Internet Archive was home to 40 petabytes of data.[3] The Internet Archive also developed many of its own tools for collecting and storing its data, including PetaBox for storing the large amounts of data efficiently and safely, and Heritrix, a web crawler developed in conjunction with the Nordic national libraries.[2] Other projects launched around the same time included a web archiving project by the National Library of Canada, Australia's Pandora, Tasmanian web archives and Sweden's Kulturarw3.[4][5]

From 2001 to 2010,[failed verification] the International Web Archiving Workshop (IWAW) provided a platform to share experiences and exchange ideas.[6][7] The International Internet Preservation Consortium (IIPC), established in 2003, has facilitated international collaboration in developing standards and open source tools for the creation of web archives.[8]

The now-defunct Internet Memory Foundation was founded in 2004 by the European Commission in order to archive the web in Europe.[2] This project developed and released many open source tools, such as "rich media capturing, temporal coherence analysis, spam assessment, and terminology evolution detection."[2] The data from the foundation is now housed by the Internet Archive, but is not currently publicly accessible.[9]

Despite the fact that there is no centralized responsibility for its preservation, web content is rapidly becoming the official record. For example, in 2017, the United States Department of Justice affirmed that the government treats the President’s tweets as official statements.[10]

Collecting the web

Web archivists generally archive various types of web content including HTML web pages, style sheets, JavaScript, images, and video. They also archive metadata about the collected resources such as access time, MIME type, and content length. This metadata is useful in establishing authenticity and provenance of the archived collection.
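The provenance metadata described above can be illustrated with a short sketch. The function below is hypothetical (not from any particular archiving tool) and simply builds a record containing the access time, MIME type, and content length mentioned in the text, plus a content digest that can help establish authenticity:

```python
import hashlib
from datetime import datetime, timezone

def describe_capture(url, body, headers):
    """Build a provenance metadata record for one archived resource:
    access time, MIME type, content length, and a digest of the bytes."""
    return {
        "url": url,
        "access_time": datetime.now(timezone.utc).isoformat(),
        "mime_type": headers.get("Content-Type", "application/octet-stream"),
        "content_length": len(body),
        "sha256": hashlib.sha256(body).hexdigest(),
    }

# Example with a hypothetical captured response (no network access needed):
record = describe_capture(
    "https://example.org/",
    b"<html><body>Hello</body></html>",
    {"Content-Type": "text/html; charset=utf-8"},
)
print(record["mime_type"], record["content_length"])
```

In practice such metadata is stored alongside the payload, for example in WARC record headers.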

Methods of collection

Remote harvesting

The most common web archiving technique uses web crawlers to automate the process of collecting web pages.[5] Web crawlers typically access web pages in the same manner that users with a browser see the Web, and therefore provide a comparatively simple method of remotely harvesting web content. Examples of web crawlers used for web archiving include:

  • Heritrix
  • HTTrack
  • Wget

Various free services may be used to archive web resources "on-demand" using web crawling techniques; these include the Wayback Machine and WebCite.
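The harvesting loop a crawler performs — fetch a page, extract its links, and enqueue them — can be sketched as follows. This is a minimal illustration, not the logic of Heritrix or any real crawler; the in-memory `site` dictionary is a hypothetical stand-in for HTTP fetches:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect href values from anchor tags in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, fetch, limit=100):
    """Breadth-first harvest from start_url; `fetch` maps a URL to its
    HTML (a stand-in for an HTTP GET). Returns the archived URLs."""
    frontier, archived = [start_url], set()
    while frontier and len(archived) < limit:
        url = frontier.pop(0)
        if url in archived:
            continue
        html = fetch(url)
        if html is None:
            continue
        archived.add(url)
        parser = LinkExtractor()
        parser.feed(html)
        # Resolve relative links against the current page's URL.
        frontier.extend(urljoin(url, href) for href in parser.links)
    return archived

# Hypothetical three-page site held in memory instead of fetched over HTTP:
site = {
    "https://example.org/": '<a href="/a">A</a> <a href="/b">B</a>',
    "https://example.org/a": '<a href="/">home</a>',
    "https://example.org/b": "",
}
print(sorted(crawl("https://example.org/", site.get)))
```

A production crawler would add politeness delays, robots.txt checks, and persistent storage of each response.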

Database archiving

Database archiving refers to methods for archiving the underlying content of database-driven websites. It typically requires the extraction of the database content into a standard schema, often using XML. Once stored in that standard format, the archived content of multiple databases can then be made available using a single access system. This approach is exemplified by the DeepArc and Xinq tools developed by the Bibliothèque Nationale de France and the National Library of Australia respectively. DeepArc enables the structure of a relational database to be mapped to an XML schema, and the content exported into an XML document. Xinq then allows that content to be delivered online. Although the original layout and behavior of the website cannot be preserved exactly, Xinq does allow the basic querying and retrieval functionality to be replicated.
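The core idea — mapping relational content into an XML document — can be sketched in a few lines. This is an illustrative example in the spirit of the DeepArc approach described above, not DeepArc's actual implementation; the table and data are hypothetical:

```python
import sqlite3
import xml.etree.ElementTree as ET

def export_table_to_xml(conn, table, columns):
    """Serialize one relational table as an XML document, mapping each
    row to a <row> element and each column to a child element."""
    col_list = ", ".join(columns)
    cur = conn.execute(f"SELECT {col_list} FROM {table}")  # trusted names only
    root = ET.Element("table", name=table)
    for row in cur:
        rec = ET.SubElement(root, "row")
        for col, value in zip(columns, row):
            ET.SubElement(rec, col).text = "" if value is None else str(value)
    return ET.tostring(root, encoding="unicode")

# Demonstration with a hypothetical in-memory database:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (id INTEGER, title TEXT)")
conn.execute("INSERT INTO pages VALUES (1, 'Home'), (2, 'About')")
xml_doc = export_table_to_xml(conn, "pages", ["id", "title"])
print(xml_doc)
```

Once in this form, the content of many databases can be searched and delivered through a single access system, as the text notes.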

Transactional archiving

Transactional archiving is an event-driven approach, which collects the actual transactions which take place between a web server and a web browser. It is primarily used as a means of preserving evidence of the content which was actually viewed on a particular website, on a given date. This may be particularly important for organizations which need to comply with legal or regulatory requirements for disclosing and retaining information.[11]

A transactional archiving system typically operates by intercepting every HTTP request to, and response from, the web server, filtering each response to eliminate duplicate content, and permanently storing the responses as bitstreams.
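The dedupe-and-store step described above can be sketched as follows. This is a minimal, hypothetical illustration (not any particular product's design): each intercepted response is stored as a bitstream unless an identical payload has already been archived, detected here via a content digest:

```python
import hashlib

class TransactionalArchive:
    """Store each intercepted HTTP response as a bitstream, skipping
    payloads whose content has already been archived."""
    def __init__(self):
        self.seen_digests = set()
        self.store = []  # list of (url, timestamp, bytes) records

    def record(self, url, timestamp, response_body):
        digest = hashlib.sha256(response_body).hexdigest()
        if digest in self.seen_digests:
            return False  # duplicate content; not stored again
        self.seen_digests.add(digest)
        self.store.append((url, timestamp, response_body))
        return True

archive = TransactionalArchive()
archive.record("https://example.org/", "2017-01-01T00:00:00Z", b"<html>v1</html>")
# An identical response captured later is recognized as a duplicate:
is_new = archive.record("https://example.org/", "2017-01-02T00:00:00Z", b"<html>v1</html>")
print(is_new)
```

A real system would intercept traffic at the server (e.g., as a proxy or filter) rather than being called explicitly, and would persist the bitstreams with their capture metadata.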

Difficulties and limitations

Crawlers

Web archives which rely on web crawling as their primary means of collecting the Web are influenced by the difficulties of web crawling:

  • The robots exclusion protocol may request crawlers not access portions of a website. Some web archivists may ignore the request and crawl those portions anyway.
  • Large portions of a web site may be hidden in the Deep Web. For example, the results page behind a web form can lie in the Deep Web if crawlers cannot follow a link to the results page.
  • Crawler traps (e.g., calendars) may cause a crawler to download an infinite number of pages, so crawlers are usually configured to limit the number of dynamic pages they crawl.
  • Most archiving tools do not capture the page exactly as it appears; ad banners and images are often missed during archiving.
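The first point, honoring the robots exclusion protocol, can be illustrated with Python's standard library. The robots.txt content and URLs below are hypothetical; a compliant crawler checks each URL against the site's rules before fetching:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt excluding part of a site from crawling:
robots_txt = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A crawler that honors the protocol checks each URL before fetching:
print(rp.can_fetch("archive-bot", "https://example.org/public/page.html"))   # True
print(rp.can_fetch("archive-bot", "https://example.org/private/page.html"))  # False
```

As the text notes, the protocol is advisory: some archival crawlers deliberately ignore it in order to capture the excluded portions.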

However, a native format web archive, i.e., a fully browsable web archive with working links, media, etc., is only really possible using crawler technology.

The Web is so large that crawling a significant portion of it takes a large number of technical resources. The Web is changing so fast that portions of a website may change before a crawler has even finished crawling it.

General limitations

Some web servers are configured to return different pages to web archiver requests than they would in response to regular browser requests. This is typically done to fool search engines into directing more user traffic to a website; it may also be done to avoid accountability, or to provide enhanced content only to those browsers that can display it.

Not only must web archivists deal with the technical challenges of web archiving, they must also contend with intellectual property laws. Peter Lyman[12] states that "although the Web is popularly regarded as a public domain resource, it is copyrighted; thus, archivists have no legal right to copy the Web". However, national libraries in some countries[13] have a legal right to copy portions of the web under an extension of legal deposit.

Some private non-profit web archives that are made publicly accessible like WebCite, the Internet Archive or the Internet Memory Foundation allow content owners to hide or remove archived content that they do not want the public to have access to. Other web archives are only accessible from certain locations or have regulated usage. WebCite cites a recent lawsuit against Google's caching, which Google won.[14]

Laws

In 2017 the Financial Industry Regulatory Authority, Inc. (FINRA), a United States financial regulatory organization, released a notice stating that all businesses conducting digital communications are required to keep a record. This includes website data, social media posts, and messages.[15] Some copyright laws may inhibit Web archiving. For instance, academic archiving by Sci-Hub falls outside the bounds of contemporary copyright law. The site provides enduring access to academic works including those that do not have an open access license and thereby contributes to the archival of scientific research which may otherwise be lost.[16][17]

See also

  • Anna's Archive
  • Archive site
  • Archive Team
  • archive.today (formerly archive.is)
  • Collective memory
  • Common Crawl
  • Digital hoarding
  • Digital preservation
  • Digital library
  • Google Cache
  • List of Web archiving initiatives
  • Memento Project
  • Minerva Initiative
  • Mirror website
  • National Digital Information Infrastructure and Preservation Program (NDIIPP)
  • National Digital Library Program (NDLP)
  • PADICAT
  • PageFreezer
  • Pandora Archive
  • UK Web Archive
  • Virtual artifact
  • Wayback Machine
  • Web crawling
  • WebCite

References

Citations

  1. ^ Truman, Gail (2016). "Web Archiving Environmental Scan". Harvard Library.
  2. ^ a b c d e Toyoda, M.; Kitsuregawa, M. (May 2012). "The History of Web Archiving". Proceedings of the IEEE. 100 (Special Centennial Issue): 1441–1443. doi:10.1109/JPROC.2012.2189920. ISSN 0018-9219.
  3. ^ "Inside Wayback Machine, the internet's time capsule". The Hustle. September 28, 2018. sec. Wayyyy back. Retrieved July 21, 2020.
  4. ^ Costa, Miguel; Gomes, Daniel; Silva, Mário J. (September 2017). "The evolution of web archiving". International Journal on Digital Libraries. 18 (3): 191–205. doi:10.1007/s00799-016-0171-9. S2CID 24303455.
  5. ^ a b Consalvo, Mia; Ess, Charles, eds. (April 2011). "Web Archiving – Between Past, Present, and Future". The Handbook of Internet Studies (1 ed.). Wiley. pp. 24–42. doi:10.1002/9781444314861. ISBN 978-1-4051-8588-2.
  6. ^ "IWAW 2010: The 10th Intl Web Archiving Workshop". www.wikicfp.com. Retrieved August 19, 2019.
  7. ^ "IWAW: International Web Archiving Workshops". bibnum.bnf.fr. Archived from the original on November 20, 2012. Retrieved August 19, 2019.
  8. ^ "About the IIPC". IIPC. Retrieved April 17, 2022.
  9. ^ "Internet Memory Foundation : Free Web : Free Download, Borrow and Streaming". archive.org. Internet Archive. Retrieved July 21, 2020.
  10. ^ Regis, Camille (June 4, 2019). "Web Archiving: Think the Web is Permanent? Think Again". History Associates. Retrieved July 14, 2019.
  11. ^ Brown, Adrian (January 10, 2016). Archiving websites : a practical guide for information management professionals. ISBN 978-1-78330-053-2. OCLC 1064574312.
  12. ^ Lyman (2002)
  13. ^ "Legal Deposit | IIPC". netpreserve.org. Archived from the original on March 16, 2017. Retrieved January 31, 2017.
  14. ^ "WebCite FAQ". Webcitation.org. Retrieved September 20, 2018.
  15. ^ "Social Media and Digital Communications" (PDF). finra.org. FINRA.
  16. ^ Claburn, Thomas (September 10, 2020). "Open access journals are vanishing from the web, Internet Archive stands ready to fill in the gaps". The Register.
  17. ^ Laakso, Mikael; Matthias, Lisa; Jahn, Najko (2021). "Open is not forever: A study of vanished open access journals". Journal of the Association for Information Science and Technology. 72 (9): 1099–1112. arXiv:2008.11933. doi:10.1002/ASI.24460. S2CID 221340749.

General bibliography

  • Brown, A. (2006). Archiving Websites: A Practical Guide for Information Management Professionals. London: Facet Publishing. ISBN 978-1-85604-553-7.
  • Brügger, N. (2005). Archiving Websites: General Considerations and Strategies. Aarhus: The Centre for Internet Research. ISBN 978-87-990507-0-3. Archived from the original on January 29, 2009.
  • Day, M. (2003). "Preserving the Fabric of Our Lives: A Survey of Web Preservation Initiatives" (PDF). Research and Advanced Technology for Digital Libraries: Proceedings of the 7th European Conference (ECDL). Lecture Notes in Computer Science. 2769: 461–472. doi:10.1007/978-3-540-45175-4_42. ISBN 978-3-540-40726-3.
  • Eysenbach, G. & Trudel, M. (2005). "Going, going, still there: using the WebCite service to permanently archive cited web pages". Journal of Medical Internet Research. 7 (5): e60. doi:10.2196/jmir.7.5.e60. PMC 1550686. PMID 16403724.
  • Fitch, Kent (2003). "Web site archiving: an approach to recording every materially different response produced by a website". Ausweb 03. Archived from the original on July 20, 2003. Retrieved September 27, 2006.
  • Jacoby, Robert (August 19, 2010). "Archiving a Web Page". Archived from the original on January 3, 2011. Retrieved October 23, 2010.
  • Lyman, P. (2002). "Archiving the World Wide Web". Building a National Strategy for Preservation: Issues in Digital Media Archiving.
  • Masanès, J., ed. (2006). Web Archiving. Berlin: Springer-Verlag. ISBN 978-3-540-23338-1.
  • Pennock, Maureen (2013). Web-Archiving. DPC Technology Watch Reports. Great Britain: Digital Preservation Coalition. doi:10.7207/twr13-01. ISSN 2048-7916.
  • Toyoda, M.; Kitsuregawa, M. (2012). "The History of Web Archiving". Proceedings of the IEEE. 100 (special centennial issue): 1441–1443. doi:10.1109/JPROC.2012.2189920.

External links

  • International Internet Preservation Consortium (IIPC)—International consortium whose mission is to acquire, preserve, and make accessible knowledge and information from the Internet for future generations
  • National Library of Australia, Preserving Access to Digital Information (PADI)
  • Library of Congress—Web Archiving

