The Scriptorium is Wikisource's community discussion page. Feel free to ask questions or leave comments. You may join any current discussion or start a new one. Project members can often be found in the #wikisource IRC channel webclient. For discussion related to the entire project (not just the English chapter), please discuss at the multilingual Wikisource.


This section can be used by anyone to communicate Wikisource-related and relevant information; it is not restricted. Announcements generally attract little or no discussion, so if a discussion is warranted, add a section to Other and link to it from the announcement.

Identifying any Toolserver links

At the end of this month Toolserver ( //, supported by WMF-DE) will be shutting down, and all the tools are meant to have been migrated to Tool Labs (//, within WMF framework). Would all users please keep an eye out and check links for tools that you use on the site, and bring them to the attention of the community. Hopefully the tools have been migrated, otherwise we will have to have a mad scramble and get things moved. (PS. Not worried about anything that is sitting in a talk page archive.) Thanks. — billinghurst sDrewth 10:34, 4 June 2014 (UTC)

I am not sure if this is the sort of thing you are looking for, but your query prompted my curiosity, and according to my browser logs this URL is being constantly polled (probably once every page view? I have no real idea why, but the language selector is probably a bit of a hint too):
AuFCL (talk) 11:33, 4 June 2014 (UTC)
Exactly the sort of thing. That is the "WhatLeavesHere" gadget, and I have pinged User:Krinkle to see if he is migrating the required component of his script. If it isn't migrated, we will retire the gadget. — billinghurst sDrewth 13:32, 4 June 2014 (UTC)
@AuFCL: this one should now be resolved. — billinghurst sDrewth 03:22, 6 June 2014 (UTC)
@sDrewth Agreed: no more activity turning up here (in the last half-hour or so). The slot formerly occupied now appears to be going to instead. AuFCL (talk) 05:50, 6 June 2014 (UTC)
It should have always been going there, as that is the gadget. The gadget now should be calling "//" which is the replacement box, and the upgraded set of scripts. — billinghurst sDrewth 07:35, 6 June 2014 (UTC)
Has the Proofread Page Statistics tool been migrated? It is still linked from Help:Page status and possibly other pages. - AdamBMorgan (talk) 11:47, 6 June 2014 (UTC)
Yes, under phetools at Toollabs:. Done — billinghurst sDrewth 16:03, 6 June 2014 (UTC)

Comment — I've taken the list down to under 500 leaving us with mostly archived or talk-page instances of Toolserver usage/linkage for now, but there are at least 2 old toolserver account holders that really need to be ported over to wmflabs to retain consistency...

  • Inductiveload
  • ~vvv
...among a handful of possible others. You can see the current list HERE. -- George Orwell III (talk) 16:48, 8 June 2014 (UTC)
The linked list contains four unused toolserver links with my name. They served no purpose. Should I remove the links from the archives?— Ineuw talk 07:09, 25 June 2014 (UTC)
{{Anontools/ipv4}} & {{Anontools/ipv6}} both currently contain (non-functional) hard-coded references to // This deficiency ripples up through MediaWiki:sp-contributions-footer-anon to affect non-logged-in Special:Contributions use. unsigned comment by Snippy (talk) .
No obvious replacements for the links, so I have just commented them out. Thanks for the notification. — billinghurst sDrewth 08:05, 14 July 2014 (UTC)

Vector skin: Thumbnail style update

There's an upcoming change to the thumbnail styling in Vector […]

The primary change is to remove the "box" border, which will bring the clean style that was recently added to <gallery> to all our thumbnails, plus consistency with our mobile view and the images on most Main Pages.

It will be arriving on non-Wikipedias [including English Wikisource] on August 12, and on Wikipedias on August 14, so there's additional time to acclimate. The design team will be available to discuss this, and other updates and ideas, with anyone interested at Wikimania in 2 weeks.

Please see documentation and details at mw:Thumbnail style update.

—Quiddity, Wikitech-ambassadors mailing list

Passing the information along. — billinghurst sDrewth 00:21, 23 July 2014 (UTC)
Thanks for the pointer - looks like they've changed some pieces that are already possible with {{FI}} & {{FIS}} though. -- George Orwell III (talk) 23:15, 23 July 2014 (UTC)


Automated import of openly licensed scholarly articles

The idea of systematically importing openly licensed scholarly articles into Wikisource has popped up from time to time. For instance, it formed the core of WikiProject Academic Papers and is mentioned in the Wikisource vision. However, the Wikiproject relied on human power, never reached its full potential, and eventually became inactive. The vision has yet to materialise. We plan to bridge the gap through automation. We are a subset of WikiProject Open Access (user:Daniel Mietchen, user:Maximilianklein, user:MattSenate), and we have funding from the Open Society Foundations via Wikimedia Deutschland to demo suitable workflows at Wikimania (see project page). Specifically, we plan to import Open Access journal articles into Wikisource when they are cited on Wikipedia. The import would be performed by a group of bots intended to make reference handling more interoperable across Wikimedia sites. Their main tasks are:

  • (on Wikipedia) signalling which references are openly licensed, and linking them to the full text on Wikisource, the media on Commons and the metadata on Wikidata;
  • (on Commons) importing images and other media associated with the source article;
  • (on Wikisource) importing the full text of the source article and embedding the media in there;
  • (on Wikidata) handling the metadata associated with the source article, and signalling that the full text is on Wikisource and the media on Commons.

These Open Access imports on Wikisource will be linked to and from other Wikimedia sister sites. Our first priority though will be linking from English Wikipedia, focusing on the most cited Open Access papers, and the top-100 medical articles. In order to move forward with this, we need

  • General community approval
  • Community feedback on workflows, and scrutiny of our test imports in particular.
  • Bot permission. For more technical information read our bot spec on Github.

We will have a Google hangout to answer any questions live on Sunday, June 15th 2014, at 6PM UTC. Please come and ask us questions. original link/new link

Daniel Mietchen (talk) 07:14, 10 June 2014 (UTC)

  • Interesting idea; it all sounds very positive. Is there any opposition, or reasons for not doing this? Jeepday (talk) 11:00, 10 June 2014 (UTC)
What users are going to proofread and validate all of these texts? A bot can't do that. ResScholar (talk) 07:08, 11 June 2014 (UTC)
This is going to sound trite, but I am quite serious. If these works are truly imported "born-digital" efforts, then proofreading is going to amount to generating some kind of digital hash of the originating (website?) and of the imported copy; and validation will become someone (or bot) verifying the two calculations coincide. Presumably if they do not that triggers (the possibility of) a new edition? AuFCL (talk) 07:44, 11 June 2014 (UTC)
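To make the hash-comparison idea concrete, here is a minimal sketch (the function names and the choice of SHA-256 are illustrative only, not part of any existing gadget or proposed bot):

```python
import hashlib

def content_hash(text: str) -> str:
    """Digest of a document's canonical text (SHA-256 chosen for illustration)."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def matches_source(source_text: str, imported_text: str) -> bool:
    """'Validation' reduces to checking that the two digests coincide."""
    return content_hash(source_text) == content_hash(imported_text)
```

A mismatch would then flag the import for review, or signal that the publisher has issued a new edition.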
I think it's a very good idea, a serious step for the integration of the w:Open Access world with the Wikimedia one. Being born-digital content, I would support all kinds of "bot-hash" validation, but please do not tell me (somebody, in the past did) to upload the PDFs in the Proofread extension... :-D Aubrey (talk) 08:25, 11 June 2014 (UTC)
Yes, we are talking about born-digital documents, so the proofreading effort should be vastly below normal by Wikisource standards. We cannot guarantee that it will be zero, though, despite having tested the pipeline from several angles. There could always be unusual ways of formatting in the sources that may cause problems with the import. Such cases would have to be handled as bugs (see the bug tracker), and once these are fixed, the text would have to be re-imported (and images or media re-inserted). Some sort of automated quality check is desirable, but simple hashes won't do, since the materials are converted from one flavour of XML (JATS) - with quite some inconsistencies - into another (MediaWiki XML). -- Daniel Mietchen (talk) 14:51, 11 June 2014 (UTC)
It's also worth noting that we are building both on-wiki and developer communities through en:w:WP:WikiProject Open Access and wpoa on github, providing a base for long-term sustainability of the project. In terms of proofreading and validation, we can also implement spot-checking and various other organic methods for general quality assurance on top of the bug-handling method Daniel mentioned above. Mattsenate (talk) 18:49, 12 June 2014 (UTC)
This is a great project, Daniel! A few observations: there is some data in the original that is not being brought over, including the copyright information, author information, article notes, and full citation information (URL, DOI, publication date, etc.). This shouldn't just be on a WikiProject (non-main namespace) page. I'm not sure if this all belongs in the header, but since there is no index of page scans to hold the bibliographic information, it should go somewhere in the article page.

There should potentially be a PLoS One page, and similar created for each journal, like the general Popular Science Monthly which links to each article. I am also curious, if you have some minimal amount of author metadata (as [1] does), could the bot create minimal author pages on Wikisource? (If so, could it also create a new Wikidata item for them/update an existing one with the Wikisource link?) Will the new Wikidata items for the journal articles indicate in some way that the texts are cited in a Wikipedia article (is that an existing property?) I also think that there should be some kind of a template for any article cited on Wikipedia, perhaps as an additional parameter in {{plain sister}}, instead of just being used for Wikipedia articles that are about the text. Dominic (talk) 15:03, 11 June 2014 (UTC)

@Dominic:, I like these suggestions. Yes, I think these should eventually go in the main namespace. As for organization, I don't mind having this done by journal, or publisher, or neither and just using categories. We can do whatever the community desires on that front. As for author and other metadata, that information is available, and we can upload it to Wikisource, no problem. We are not planning for Wikidata integration in our first phase because Wikidata will not support arbitrary-item lookup yet, but it's in our long-term goals. Lastly, the fantasy of seeing which Wikipedia articles cite a given journal will be materialized in a hacky way, as we plan to publicly expose the live database the bot makes of what-cites-what. Maximilianklein (talk) 18:36, 12 June 2014 (UTC)
FYI, here is the Directory of Open Access Journals, which lists freely accessible archives of serials. Slowking4 (Farmbrough's revenge) 03:20, 12 June 2014 (UTC)
Based on our "hangout" video chat discussion, we would love some feedback on these motivations for the project:
Primary reasons to incorporate this content into Wikisource:
  • Signalling that a given reference has a mediawiki-marked-up copy on Wikisource is a clear indication that the source is actually Open Access (has a compatible license).
  • Providing full text, with in-context images, video, audio, and other media facilitates improving Wikipedia as a deep, rich, "free" as in "freedom" reference work.
  • Uploading source content including text, images, and other media closer to time of publication reduces the barrier to entry to cite academic works in Wikipedia and other Wikimedia projects.
Mattsenate (talk) 19:05, 15 June 2014 (UTC)
Did you mention what you intended to do about author pages? Charles Matthews (talk) 18:37, 20 June 2014 (UTC)
I received an invitation to comment, but it was not addressed to me. The invitation posted to my user talk page was addressed to Billinghurst. When the invitation is addressed incorrectly, and contains typographical errors, I'm not impressed. How will giving this job to a bot succeed if the initial proposal isn't even proofread and sent out correctly? Is this just a proposal to have a bot dump stuff here, or are there qualified and skilled people somewhere willing to ensure this works the way it should? Also, are they proposing to upload the images, video, and audio here? That shouldn't happen, as that's what Commons is for. --EncycloPetey (talk) 19:35, 20 June 2014 (UTC)
@EncycloPetey:, sorry I misaddressed your User talk message - I was sending out quite a few, and Special:MassMessage is admins-only. In fact, that rather highlights the need for automation in general: a bot wouldn't have got tired at the twentieth message and made such a mistake. I hope we can address the proposal on its own terms, not on my invitation-message skills. In response to your questions, we are not 'just dumping' articles; we will first start with the most highly cited articles on English Wikipedia, and then link the references to Wikisource. So the articles will be receiving attention directed from English Wikipedia as well. Additionally, we will be using Commons for the images and videos, and our example articles already sport Commons usage. Do you have any other concerns?
Maximilianklein (talk) 22:33, 20 June 2014 (UTC)
@Charles Matthews: the problem with starting author pages is that there is not yet a good system that would allow us to (a) disambiguate between different authors spelled the same way (ORCID aims to solve that problem but has not been widely adopted yet) and (b) import information about the respective authors that goes beyond the name and perhaps affiliation. So it is not within scope for us at the moment. -- Daniel Mietchen (talk) 22:18, 20 June 2014 (UTC)
@Charles Matthews: +1 to what Daniel said. In this regard, we expect to utilize the "No author link" feature of Template:Header (or equivalent) to prevent the generation of author-page redlinks. Mattsenate (talk) 01:44, 21 June 2014 (UTC)
Well, one reason I asked is because the author page information we give here on Wikisource is a reasonably distinctive feature of this repository. Another is that I'm aware of ORCID. A third is that author pages here are now connected with Wikidata, and we should be thoughtful about that.
Is there any chance you could think further about the decision to "dump" papers here without serious author information, for example by developing a view on ORCID? A couple of points on this are (a) my own view that WS should be looking to differentiate itself by "adding value" to its content, not simply hosting it; and (b) the announcement above of a WS panel at Wikimania, for which this project could provide a debating point: should WS folk be pro-active in promoting addition of metadata, rather than "relaxed"? Charles Matthews (talk) 04:25, 21 June 2014 (UTC)
Questions and Comments—a) While hosting public domain scientific papers is certainly within our ambit, why do we need to host digital-native papers that are also hosted by their publishers? Is access to the papers in danger of being lost? b) I agree with Charles' concerns about not linking the authors. The main purpose of the "override_author" field is to allow for linking of multiple authors, and a secondary purpose is for authors that are unlinkable. It's not intended as a way to get away without linking authors. Beeswaxcandle (talk) 06:21, 21 June 2014 (UTC)
In response to Beeswaxcandle's question (a), "why should we host things that are hosted someplace else?": For the same reason we host works that are hosted by Project Gutenberg and/or Google Books. We are a library, not the library of things without another home. Jeepday (talk) 11:43, 21 June 2014 (UTC)
I don't know if we're a good place for these things, but access is always in danger of being lost. Sites that were assumed to be permanent disappear fairly frequently, and publishers are hardly reliable sources; if it's not economically valuable, they'll toss it to the side in a second, or just let it decay and not notice or care. Massive mirroring is the best way to prevent stuff from disappearing.--Prosfilaes (talk) 11:57, 21 June 2014 (UTC)
Other reasons to host these articles here are that citations from places like Wikipedia could actually link to a section or figure or other part of the source, rather than to the work as a whole, which makes it much easier to follow, understand and verify streams of argumentation (remember that we plan to import articles upon citation from Wikipedia). Plus, import of full articles here makes the images available for reuse across Wikimedia sites and beyond. Thousands of images from open access scholarly sources have been uploaded manually to Commons and reused from there, so why not support this process with a dose of automation? -- Daniel Mietchen (talk) 02:47, 22 June 2014 (UTC)

I like the idea and I think the author pages can be created via bot (in cases of authors with the same name they can be created manually, if bot creation is difficult, impossible, or problematic). Will the Wikidata item be classified as "work" or "edition"?--Erasmo Barresi (talk) 08:56, 21 June 2014 (UTC)

The journal articles would probably be classified as article (Q191067). -- Daniel Mietchen (talk) 02:47, 22 June 2014 (UTC)

If we don't have author pages, how will we link together two or more works by the same person? Also, +1 for using ORCID (I'm Wikipedian-in-Residence at ORCID, and can help with that). Pigsonthewing (talk) 09:41, 21 June 2014 (UTC)

We would be interested in creating the relevant author pages automatically if there is sufficient information. @Pigsonthewing can you take a look at the sample uploads and see what you can find out about the authors through ORCID's channels and how that could be automated? -- Daniel Mietchen (talk) 02:47, 22 June 2014 (UTC)
We don't want to circumvent policies but to improve reference management across Wikimedia sites and to facilitate the reuse of suitably licensed materials from scholarly sources. Volume 9 in the example above refers to the journal's content for an entire year, which has very little overlap with the content of that particular article, so there is no need to bring them over together. We are open to importing full issues or volumes of journals, but thought it better to filter in some way, preferably by usage on Wikipedia. Once the system works as proposed, it could of course be extended to more comprehensive coverage of particular journals or publishers. -- Daniel Mietchen (talk) 02:48, 22 June 2014 (UTC)
How does what amounts to an unbacked, cherry-picked copy & paste improve reference management exactly? I understand you folks seek the ability to specifically target a line or lines in a work hosted here that supports the assertion or assertions being presented in a Wikipedia article, but then what? Devil's advocate says Wikisource is BS edited by BSers - without a hard copy backing up the article, all you've done is introduce doubt rather than eliminate it completely. And when the entire body of a publication is available for inspection, it's hard to argue the content has been tampered with at the same time. I'm not saying you need to proofread all 700 pages or whatever of a "Volume 9" into mainspace works for the sake of the 16 or so specifically needed either. -- George Orwell III (talk) 03:28, 22 June 2014 (UTC)
I don't want to "force" my opinion on a project (en.s) which I don't know well, but I would strongly suggest weighing the pros and cons of this project. IMHO, this idea is fully within the scope and boundaries of Wikisource. Wikisource is a wiki digital library (I hope we can agree on this) and it serves for access, referencing and linking of free texts. We are supposed to be a primary source, thus with reliable and accurate transcriptions of texts. The Proofread extension came 7/8 years ago, and since then it has changed the behavior of the WS communities a great deal, for the better. But it often gives us the impression that everything needs to be backed up by scans and proofread and validated. We are talking about "born-digital" documents. I'm quite perplexed and confused by the idea that I need to treat a born-digital document as if it were made of paper. We quite literally have the "original copy" of the article (its XML source) and could see the PDF as a derivative work. If we put all these texts into the Proofread extension, we will lose the chance to have readable, reliable texts in a snap, and we will wait for ages for people patient enough to want to proofread a PDF.
I don't think it's worth it, for the sake of higher reliability, to sacrifice a ripe, sweet, low-hanging fruit. We will already have the PDFs on Commons, and are trying to set up a suitable workflow for all the data. We will gain readability, a source for Wikipedia, increased readership, and maybe for the first time in the history of Wikisource we will be a real, up-to-date, and important source for Wikipedia. We will take a real, important step in the integration and collaboration of Open Access and the Wikimedia world.
I'm sorry if I sound emotional, but this for me is a real breakthrough, and I would very much like to see how it goes, and what we can accomplish with it. Aubrey (talk) 20:18, 22 June 2014 (UTC)
Your points are all well taken here but I still beg to differ. Being born-digital, I expect we'll have less in the way of correcting and more in the way of simple formatting (wiki-markup) than usual. I don't see what we are losing by treating all types of hosted works as uniformly as possible while moving forward. History has taught us works backed by scans/docs hold up far better than any Project Gutenberg text-dump ever could, and it's the only real way to preserve the fidelity of the work over the passage of time as well. You can't weasel-word your way into an article when a side-by-side, page-to-page comparison of the original is just 1 click away. - George Orwell III (talk) 21:05, 22 June 2014 (UTC)
What scares me is the fact that if we put everything in the Proofread workflow we will have hundreds of Index pages with red links and 25% and 50% pages, and moreover we will waste volunteers' energy and time proofreading something that is not worth it. I think it is more important not to waste volunteers' time than to be over-cautious about the reliability of texts, if we have a fair system to recall the source. I understand that the "original" copy is important, but that would still be one click away, on Commons. That is enough, for me (of course, MVHO). I'm also in favor of any kind of automated tool/workflow/procedure to check the source, or have it one click away. Aubrey (talk) 09:18, 23 June 2014 (UTC)
History has taught us that, huh? I assume you're talking about Project Gutenberg, which has tens of thousands of works and has been around since the start of the Internet, providing hundreds of thousands of copies of their most popular works, and comparing it to a project that doesn't have nearly that many completed books and has only been around for a decade. Alice's Adventures in Wonderland (1866)/Chapter 1 has less than one hit a day, whereas Project Gutenberg has recorded 23,826 downloads of their edition since they started keeping track; even if that were over the entire 24 years of the web, that would still be 1,000 downloads a year, not counting copies sent out on floppy, data CD, DVD or the Librivox transcription.
Looking at PG, I see the producers care much more about the fine details than the end users do. Looking at Wikipedia, it seems pretty clear that people are not continually stressed about wiki changes, even in the much harder to track environment of Wikipedia. Looking at the world, I see that treating competitors who are much larger and more successful than you as if they were nothing because they don't use your latest and greatest ideas comes off as hubris, and in the open source world will annoy people who work on both projects.--Prosfilaes (talk) 12:13, 23 June 2014 (UTC)
You assume too much. Project Gutenberg is an excellent repository - because no giblit can come in and edit the piece after the fact. This is contrary to current practice here, where once the immediate attention of producing the work dissipates, the work is open to anyone at any time. Tracking and reverting such stupidity is far easier when the work is backed by scans locally rather than having to go back to PG to verify the edit's fidelity. At that point, WS has become irrelevant - I might as well stick with PG altogether and not bother with WS at all.

And to be clear, I have no problem testing the waters by increasing traffic via this new program, but its implementation is contrary to what we already know and practice. No 'devil's advocate' gives a fuck that a work was copied and pasted from PG, because they know it's too much work to prove otherwise, just the same as it will be with some XML-backed copy & paste. One must plan for the lowest common denominator and hope that never happens. But if it does (as history tells us it will), we'll be proactively prepared to easily counter it. I just can't fully support the proposal without further discussion/consideration is all. -- George Orwell III (talk) 18:10, 23 June 2014 (UTC)

@User:George Orwell III Re "unbacked": We are talking about importing articles from PubMed Central (PMC), which is one of the world's largest repositories of digital copies of scholarly articles. These copies are supplied there directly by the publishers in a dedicated XML format (JATS) that they produce from the authors' accepted manuscript, alongside (and often as the source of) the HTML and PDF versions. Both the publisher and PMC check the quality of the XML in various ways. Plus, PMC has as solid a long-term archiving strategy as is currently possible, and most if not all of the relevant publishers are members of long-term archiving schemes like CLOCKSS. Hard copies are not normally part of these workflows (though most publishers produce some for long-term archival purposes), and I do not see the point in introducing them for importing these articles into Wikisource. Such XML-based workflows are markedly different from digitization-based ones as used at Project Gutenberg, where a digital copy of an article is obtained (usually much later and without access to the authors' manuscript) by scanning or photographing a paper copy, followed by optical character recognition, which remains an error-prone process and requires careful proofreading.
Re "cherry-picking": I fully agree with your statement "when the entire body of a publication is available for inspection, its hard to argue the content has been tampered with at the same time", and we had not even thought of not making the entire body of imported articles available for inspection here. As stated above, "we are open to importing full issues or volumes of journals, but thought it better to filter in some way, preferably by usage on Wikipedia" (and perhaps Commons), which happens on a per-article (or per-figure) basis, rather than per journal issue or volume. On the other hand, being able to link to a specific section, figure, table or other part of a cited reference through Wikisource would reduce the potential for inappropriate references to be cherry-picked as references in, say, Wikipedia articles.
Re "copy & paste": that's kind of our point here - since we are essentially (save the format conversion between JATS and MediaWiki XML, which may introduce formatting errors) talking about a "copy & paste" workflow for the article content, the need for proofreading will be much less than for materials that come here through digitization-based workflows. To facilitate side-by-side comparisons, we are open to importing the PDF too and embedding it into the Wikisource page.
Re "How does [the above] improve reference management exactly?": in the long run, we envisage our import tool to be triggered once a suitably licensed scholarly article is cited on any page in any Wikipedia. It would then
  1. import the images and associated media to Commons
  2. import the full text into the appropriate Wikisource
  3. create a Wikidata item for the scholarly article, with links to its materials on Commons and on Wikisource
  4. update the citation on Wikipedia with links to the materials on Commons and on Wikisource as well as the metadata on Wikidata.
This means that - once arbitrary access to Wikidata is implemented - the metadata for a scholarly article (along with pointers to Commons, Wikisource and Wikidata) would become instantly available for use in all Wikidata-integrated wikis, and could be curated on Wikidata. That would be a marked improvement over the current situation in which the metadata may be managed in multiple places on any given wiki, with little coordination across wikis.
We could even think of starting Wikidata items for all the references cited in that imported article, thus laying the ground for more comprehensive coverage of the literature on Wikidata. That would also provide a possibility for annotating the bibliographies of articles imported into Wikisource.
To get things started, we do not aim at all languages, all Wikimedia projects and all scholarly articles (nor their references or possible annotations) initially, but start with the English Wikipedia and with English-language articles that are openly licensed and available in a format that can be uploaded to the English Wikisource. -- Daniel Mietchen (talk) 10:47, 23 June 2014 (UTC)
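For readers following the technical thread: the citation-triggered, four-step pipeline described in this comment could be orchestrated roughly as follows. This is a hedged sketch only; every function name here is a hypothetical stand-in, not the actual bot API documented in the GitHub spec.

```python
# Hypothetical sketch of the citation-triggered import pipeline.
# Each step is a stub standing in for a real bot task.

def import_media_to_commons(article_id: str) -> str:
    # Step 1: copy images and other media to Commons.
    return f"commons:{article_id}"

def import_fulltext_to_wikisource(article_id: str) -> str:
    # Step 2: copy the converted full text to Wikisource.
    return f"wikisource:{article_id}"

def create_wikidata_item(article_id: str, commons: str, wikisource: str) -> dict:
    # Step 3: record the metadata and cross-links on Wikidata.
    return {"id": article_id, "commons": commons, "wikisource": wikisource}

def update_wikipedia_citation(item: dict) -> str:
    # Step 4: enrich the citing Wikipedia article with the new links.
    return f"citation for {item['id']} now links {item['commons']} and {item['wikisource']}"

def on_citation(article_id: str) -> str:
    """Triggered when a suitably licensed article is cited on Wikipedia."""
    commons = import_media_to_commons(article_id)
    wikisource = import_fulltext_to_wikisource(article_id)
    item = create_wikidata_item(article_id, commons, wikisource)
    return update_wikipedia_citation(item)
```

The point of the ordering is that the Wikidata item (step 3) can only link to Commons and Wikisource pages that already exist, and the Wikipedia citation (step 4) is updated last, once all targets are in place.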
@Daniel: "I do not see the point in introducing [hard copies] for importing these articles into Wikisource." Then you do not understand Wikisource's Best Practices. I would also say that you are much more optimistic about WikiData than I am. I was rather optimistic about WikiData, until it went active; the editors there either (a) do not understand what is going on at the individual projects and are not open to feedback from those projects, or (b) understand other projects but have been banned from those projects and now have control of the link data for the projects from which they were banned. --EncycloPetey (talk) 02:36, 29 June 2014 (UTC)
user:EncycloPetey user:George Orwell III What are "Wikisource's Best Practices" for born-digital works? Please stop reading here if I misunderstand you, but I think you are saying that Wikisource requires "hardcopy" versions to authenticate all works. "Born digital" works will not have hardcopies backing them up, and if I understand you correctly, you wish for uploaders of born digital works to artificially derive a new and original hardcopy version from that born digital work, then upload the hardcopy to Wikisource, and authenticate the born digital work against that. Is this what you are saying? Is this the process which would be most aligned with Wikisource practices?
What follows is more commentary on the availability of hardcopy versions: While this project is starting with some very well-funded digital publications like PubMed Central, many underfunded and now defunct open access journals, such as Zoologische mededelingen, have contributed thousands of images to thousands of Wikipedia articles, each reused in many languages now and potentially all languages in the future. This journal is not even registered with the usual DOI system, much less does it have a "hardcopy" or orderly metadata. I want authenticated versions also, but for many works, the publishers themselves do not authenticate what they publish, in the sense that their works are subject to change without notice or indication and there might otherwise be no version control. Does Wikisource have policies in place for what would sometimes amount to archiving a webpage at a certain point in time? From the perspective of this project, the information to be collected is the best that exists for an academic field, but in a lot of cases, the policies for Wikisource seem to request metadata which is ideal but does not exist in born-digital native formats. I think it is reasonable to suggest that Wikisource capture all extant information, but if I understand correctly, you wish to enforce a requirement that certain information must be provided without exception, and to block the Wikisource hosting of publications when the required information does not exist. What is the route to hosting a work with no "hard copy"? Should we artificially derive one from plain text, then authenticate our original text against its derivative? Blue Rasberry (talk) 12:32, 5 July 2014 (UTC)
Thanks @Bluerasberry: for your clear comment. I second your doubts. To me a partial solution would be to host the first version of the articles, or even more trivially to take the first version of the History of the page as the "original copy". I also suggest everyone read this essay, because born-digital documents are documents nonetheless and IMHO they should belong to Wikisource. We have the technical and social infrastructure to create a beautiful, interconnected and open access digital library; it would be a pity not to tap this potential. Aubrey (talk) 15:36, 10 July 2014 (UTC)

Comments and questions: Good to see! I support the project and could help with routine review and validation if that's useful. Please comment on these flavors of "added value," and which ones belong on wikisource:

  • The footnotes and diagrams are great, and impressive! And, it's clear what goes onto wikisource and what goes elsewhere.
  • Hyperlinks to definitions -- when a medical research paper uses non-common terms, can we wikilink to definitions? Is that appropriate for the wikisource version of a PubMed paper?
  • The table of contents, I think, gets in the way of reading the content, and I would always want to collapse it or push it to the right, e.g. with {{TOCright}}. Do you have a standard in mind?
  • categories: I gather we can be inventive on wikisource. A medical specialist told me that the MeSH categories we can inherit from PubMed are not usually useful to him, so there is maybe some possibility of helping here. I am no expert.
  • Commentary (my main question): User:Bluerasberry and I are working out a semantic-wiki project to hold human-curated relationships between scientific/scholarly/academic works. In the references you've got a key set of relationships, with "paper A cites paper B". It would be possible also to record that
  • paper B disputes the findings of paper A (for examples, see the report at bottom, which lists other papers disputing the one described)
  • works A and B use the same data set or clinical trial (as in this example from WikiPapers which has papers-about-wikis and marks some as users of DBpedia)
  • that work B is said to be a literary or artistic adaptation of work A (example from literary adaptations wiki, see "works alluded to" and "publications talking about this work"),
  • or that work C makes claims about the relation of works A and B (example from same site; the infobox lists literary works about which it makes assertions)

This sort of added value can help scientists & scholars but is maybe not sufficiently neutral for wikimedia, or for wikisource anyway. Do you think that such commentary could belong on wikisource? Alternatively it could instead be in layers/sites somewhere else and could link to and depend on the wikisource hypertext you are developing. It would help our project to know what "layers" of value belong where, in the context of your project. Thanks for doing it! -- econterms (talk) 23:14, 28 June 2014 (UTC)

Thanks. We certainly see the import of articles from PMC into Wikisource as a potential starting point for annotations like those that you described, as well as for other kinds of enhancements (be this the simple addition of links to definitions, as you suggest, or some more complex features). As for MeSH-term based categories, we use them on Commons and would be fine using them here too. Likewise, we would be fine with collapsing or right-shifting the TOC, and we are open to similar formatting requests and will do our best to accommodate them if they can be automated at reasonable effort and quality. As the TOC illustrates, any added value can also be distracting (though usually for different groups of users), and the question of the right balance between the two is probably better addressed to the Wikisource community more widely than to us specifically. I guess that gadgets or external tools could be part of striking that balance, but in any case, these are not part of our project to import full-text scholarly articles. -- Daniel Mietchen (talk) 02:08, 29 June 2014 (UTC)
One more thought on annotations: I like the detailed approach by WikiProject Fieldnotes very much, but I am not sure how it fits with the requirements in Wikisource:Annotations, which stipulate, for instance, that the presence of annotations would have to be signaled in the page title. Extrapolating that to journal articles, I see potential for confusion if we end up having page names like Journal article on topic X as well as Annotation of Journal article on topic X. -- Daniel Mietchen (talk) 02:17, 29 June 2014 (UTC)
Ah! Thank you. The annotations policy is clear. Links to definitions would not be appropriate, but annotated versions can have them. The custom of making an /Annotated subpage is used several places and seems clear. (example) And opinion/commentary on the work is too non-neutral to be called annotation. -- econterms (talk) 02:51, 30 June 2014 (UTC)


So, there is consensus or there's gonna be a vote (I'm not even sure I can vote here on en.source :-) Aubrey (talk) 15:15, 25 June 2014 (UTC)


I propose for WS:Annotations to be formally recognized as policy. It is based on the 2013 Request for Comment about derivative works.--Erasmo Barresi (talk) 08:04, 12 July 2014 (UTC)

  • Support as proposer.--Erasmo Barresi (talk) 08:04, 12 July 2014 (UTC)
  • Oppose (very strongly) --EncycloPetey (talk) 08:19, 12 July 2014 (UTC)
  • Oppose -- Mukkakukaku (talk) 17:36, 12 July 2014 (UTC)
  • Comment I know this is late (though there doesn't seem to be much support, so it may need to be written), but I think we need to contact Wikibooks and work out what types of annotated books Wikimedia should cover (at least under our domains) and how to divvy it up. b:Wikibooks:Annotated texts says we take some of them, and I think we need policy, joint with them to an extent, as to whether we do take them or not and exactly what we do take.--Prosfilaes (talk) 21:26, 13 July 2014 (UTC)

Proposal withdrawn. I opened a Request for comment where the draft is split into parts so that it can be easily commented upon.--Erasmo Barresi (talk) 15:01, 15 July 2014 (UTC)


I propose for WS:Wikilinks to be formally recognized as policy. It is strongly tied with the annotation policy since it describes which kinds of wikilinks count as annotations and which do not.--Erasmo Barresi (talk) 08:04, 12 July 2014 (UTC)

  • Support as proposer.--Erasmo Barresi (talk) 08:04, 12 July 2014 (UTC)
  • Oppose (very strongly) --EncycloPetey (talk) 08:20, 12 July 2014 (UTC)
  • Neutral/Support (weakly) except for all the bits about 'annotations'. Which is to say, I think it's a good idea to solidify policies about what should be or should not be linked (I thought these policies already existed), but I don't agree with the tie-in to the annotations policy suggestion (which I don't support). --Mukkakukaku (talk) 18:06, 12 July 2014 (UTC)

Proposal withdrawn. See above.--Erasmo Barresi (talk) 15:01, 15 July 2014 (UTC)

BOT approval requests

Reconfirm User:Cswikisource-bot

"Bot flag will be reconfirmed automatically unless; if at least three established users oppose with no users supporting, then the right will be removed; three or more oppose and one or more support this triggers a vote, with a decision by simple majority. Loss of flag does not prevent edits, only impacts recent change visibility."


Bot: User:Cswikisource-bot
Tasks: interwiki changes (in semi-manual mode)
Last confirmation: Feb 2012 (bot flag granted)
Next confirmation: July 2014
Status: Inactive; last edit Jan 2013
  1. No, well not on the current circumstances of interwiki now residing in WD. Happy to hear from Milda if there are other circumstances to consider. — billinghurst sDrewth 12:02, 2 July 2014 (UTC)
  2. No - Interwiki management should now be handled through Wikidata. -- George Orwell III (talk) 21:32, 4 July 2014 (UTC)
  3. No. Interwiki procedure is changing, and has changed a great deal since the approval. We'd need to see an appropriately modified bot process and approve that. --EncycloPetey (talk) 21:50, 4 July 2014 (UTC)

Reconfirm User:LA2-bot

"Bot flag will be reconfirmed automatically unless; if at least three established users oppose with no users supporting, then the right will be removed; three or more oppose and one or more support this triggers a vote, with a decision by simple majority. Loss of flag does not prevent edits, only impacts recent change visibility."


Bot: User:LA2-bot
Tasks: create raw OCR pages
Last confirmation: Feb 2012 (bot flag granted)
Next confirmation: July 2014
Status: Inactive; last edit March 2012
  1. I have no issue with the bot right for the purpose, if @LA2: is still receiving requests for the bot. So yes, if the user confirms that requests will still be taken. If there is no response from the bot operator, then there is no point in the bot retaining the right, and we can remove it, though allowing for a quick reinstatement if the operations are again activated. — billinghurst sDrewth 12:02, 2 July 2014 (UTC)
  2. No - Unless User:LA2 proactively asks to keep the flag & demonstrates how the BOT will be used moving forward. -- George Orwell III (talk) 21:32, 4 July 2014 (UTC)
  3. No - As per GO3 & Billinghurst. Jeepday (talk) 00:54, 22 July 2014 (UTC)
  4. No - I see no reason to keep an open flag for an inactive bot that was only used for one month. --EncycloPetey (talk) 04:04, 22 July 2014 (UTC)
    • LA2 has had little activity here recently, but is still active at Commons, so I left a note on his talk page there. JeepdaySock (AKA, Jeepday) 14:45, 18 July 2014 (UTC)


Other discussions

Constant script errors

Ever since the jQuery update and its console tracking, I've been getting script errors. They don't typically cause a crash but they do seem to screw up loading/caching.

The most common error message is: Expected identifier
and it points to: targetFn.super=originFn
in something starting with var targetConstructor=targetFn.prototype.constructor;

The announcement message (up top) said to list issues in the "Help" section - so here it is. I'm not sure anybody involved with this part of the code will see this however. -- George Orwell III (talk) 02:18, 22 June 2014 (UTC)

Found the cause with a fix in Bugzilla: 67565. I sure wish they would patch it asap rather than next week -- George Orwell III (talk) 19:04, 6 July 2014 (UTC)
How odd that Bug id: 67404 seems to have been addressed (the latter doesn't appear to be happening to me at least for a couple of days now) and yet this one is apparently resolved but still left for deferred implementation? Squeakier wheels perhaps? AuFCL (talk) 22:25, 6 July 2014 (UTC)

Interwiki links

It seems interwiki links have not been working, e.g. wːPhilosophy. Heyzeuss (talk) 14:23, 29 June 2014 (UTC)

They work fine. You are using ː (IPA triangular colon) instead of a : (normal colon) is all. Ex. - w:Philosophy -- George Orwell III (talk) 15:28, 29 June 2014 (UTC)
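The two characters really are distinct code points, which is why the link fails; a minimal Python check (not part of the original exchange, just an illustration):

```python
import unicodedata

# The two look-alike characters from the exchange above:
# U+02D0 MODIFIER LETTER TRIANGULAR COLON (the IPA length mark)
# versus U+003A COLON (the ordinary ASCII colon).
ipa_colon = "\u02d0"    # ː
plain_colon = "\u003a"  # :

print(ipa_colon == plain_colon)          # → False
print(ord(ipa_colon), ord(plain_colon))  # → 720 58
print(unicodedata.name(ipa_colon))       # → MODIFIER LETTER TRIANGULAR COLON
print(unicodedata.name(plain_colon))     # → COLON
```

MediaWiki only recognizes the ASCII colon as the namespace/interwiki separator, so `wːPhilosophy` is treated as a plain page title rather than a link prefix.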

Tech News: 2014-27

06:53, 30 June 2014 (UTC)

Attention template developers: Changes to entity reference handling in #ifeq and #switch

Just to let you all know there is a small change to how the #ifeq: and #switch: parser functions work.

Previously entity references (&amp;, &gt;, &quot;, etc.) were considered different from the characters they represented. For example:

{{#ifeq:&amp;|&|the same|different}} outputted "different"

This has changed so that they are now considered the same. In particular, this means that pages with certain special characters in the title (i.e. * ' " = ;) will now have {{PAGENAME}} "equal" to the actual page name. For example, on a page named "*foo"

{{#ifeq:{{PAGENAME}}|*foo|the same|different}} used to output "different"; it will now output "the same".

Change goes live on testwiki/ on July 3, non-Wikipedia projects on July 8, and Wikipedia on July 10. You can test right now

—bawolff, Wikitech-ambassadors mailing list

Thanks to @Bawolff: for his general notification to communities that I am sharing above. — billinghurst sDrewth 10:42, 1 July 2014 (UTC)
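The behaviour change described in the announcement can be simulated in Python, with `html.unescape` standing in for the parser's entity normalization (a sketch, not the actual MediaWiki code):

```python
from html import unescape

def ifeq_old(a, b):
    """Pre-change behaviour: entity references compared literally."""
    return "the same" if a == b else "different"

def ifeq_new(a, b):
    """Post-change behaviour: decode entity references before comparing."""
    return "the same" if unescape(a) == unescape(b) else "different"

# The entity reference &amp; and the bare character & now compare equal:
print(ifeq_old("&amp;", "&"))   # → different
print(ifeq_new("&amp;", "&"))   # → the same

# Likewise for quotes, which is what makes {{PAGENAME}} on titles
# containing " start matching the literal title:
print(ifeq_new("&quot;foo", '"foo'))  # → the same
```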

...and the last point in the newsletter above..
  • You will soon be able to use {{!}} as a magic word to produce the pipe character, for instance for use in tables. [21]
... is kind of related when it comes to template development. I don't believe there will be any issues with the switch to a formal magic word, but I figure it best to highlight the change just in case I'm wrong. -- George Orwell III (talk) 19:18, 1 July 2014 (UTC)
We will need to delete the template, and we should put a pertinent note in place. Though maybe some of the developers will do a global removal as that could be considered a universal template. — billinghurst sDrewth 00:45, 2 July 2014 (UTC)
Might I strongly recommend deferring any considerations of deleting {{!}} until after such time as the change is "officially" implemented (at which point all being well the template will no longer be eligible for transclusion—i.e. hopefully the parser will be smart enough to preferentially expand the magic word before the template)?

Only then should the template be cautiously modified to "prove" it is no longer effective/necessary; and finally, only then, delete it (or not? will anybody really care at that stage?)

Absolutely no sense in exposing WS to failure conditions on the mere promise of a change? AuFCL (talk) 06:56, 2 July 2014 (UTC)

Right; the more I think about it the less I'm inclined to think the transition will be a smooth one in all cases. And with the July 4 U.S. Holiday typically causing a "week off" for developers, we won't see any of this for at least 1 week from this past Tuesday at best.

I'm left wondering why they just didn't make the pipe symbol itself {{|}} a magic word and leave poor old exclamation point completely out of the mix once and for all. Sure, serious bot work would probably be needed after the change but it's a small price to pay to recover from what started out as an ugly hack to begin with. -- George Orwell III (talk) 07:53, 2 July 2014 (UTC)

Cheap shot warning…
…because Heaven forfend somebody interfere with clever usage of parameterless nameless templates?
AuFCL (talk) 09:21, 2 July 2014 (UTC)
If you are going to use logic GOIII, you will surely be run out of town. To AuFCL, sure to deletion with testing. It was more a note for us to do and see, rather than having it done by SKS as a blind global deletion. — billinghurst sDrewth 11:23, 2 July 2014 (UTC)
I am told that {{|}} would not work as the pipe isn't a valid page title character, so it's not a valid magic word thing either. — billinghurst sDrewth 11:26, 2 July 2014 (UTC)
Cripes, I was teasing guys. I did not at all intend the last comment to be taken seriously, and I get (and pretty much agree with) Billinghurst's point regarding the first. AuFCL (talk) 12:02, 2 July 2014 (UTC)
Ah - now that makes some sense. At any rate, the switch is on for some point this coming Tuesday (no week off for developers) in 1.24wmf12. That build is already up on the testbed (see HERE for a template using it). I can't find any instance of "trouble" between original & current usage - though the testbed is kind of lame when it comes to actual "works" to verify that 100%. I guess we'll find out next week & go from there. -- George Orwell III (talk) 00:19, 4 July 2014 (UTC)
The idiotic protection over on test2 won't let me test this, but I would be happier if the template were deliberately damaged (say: by adding prefix {{deprecated}}) so that we don't have to take on faith the "Templates used on this page:" section is really telling the whole truth. Yes, I am a suspicious … too. AuFCL (talk) 02:40, 4 July 2014 (UTC)

You brought up a good point there; the change took place but, thanks to caching(?), usage still showed (for the most part) the ! template as still in use. Only a null edit (a save without editing anything) refreshed the page to reflect the correct status/usage. This led me to use the API to purge the ! template instead and voilà, the transcluded usage of the template went from in the hundreds down to fewer than 50.

So when the change takes place here, "somebody" needs to execute...

... to properly reflect usage under the new condition. Doing this purge prematurely, however, may or may not suck in the long run if some template or string of templates goes bad as result of the switch (e.g. there won't be any "What links here" listing to fall back on for troubleshooting, etc. at that point). -- George Orwell III (talk) 04:13, 4 July 2014 (UTC)
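The exact API call isn't preserved above; what follows is a hedged sketch of what such a purge could look like. `action=purge`, `generator=embeddedin` (with its `gei`-prefixed parameters) and `forcelinkupdate` are standard MediaWiki API parameters, but the specific invocation here is an assumption, not GOIII's actual command:

```python
from urllib.parse import urlencode

# English Wikisource's standard API endpoint.
API_ENDPOINT = "https://en.wikisource.org/w/api.php"

def purge_params(template_title):
    """Build parameters to purge every page that transcludes
    template_title, forcing the link tables ("What links here",
    transclusion counts) to be rebuilt rather than served stale."""
    return {
        "action": "purge",              # refresh the parser cache
        "generator": "embeddedin",      # iterate pages embedding the title
        "geititle": template_title,     # the template being checked
        "geilimit": "max",              # as many pages per batch as allowed
        "forcelinkupdate": "1",         # rebuild link tables too
        "format": "json",
    }

params = purge_params("Template:!")
# In practice this would be sent as a POST request to the endpoint:
print(API_ENDPOINT + "?" + urlencode(params))
```

Purging in batches like this is what replaces the one-null-edit-per-page workaround described above.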

Honesty compels me to admit that wasn't quite what I was railing against (I was annoyed by the staged-release mechanism making short, relatively painless change-test-revert cycles impractical) but nevertheless glad if I could trigger a useful result. AuFCL (talk) 04:45, 4 July 2014 (UTC)
A bit of a late thought: whilst Special:WhatLinksHere/Template:! works now, it won't after the change goes through. Anybody thought about capturing the current list as a starting point for the bot run later? AuFCL (talk) 02:12, 5 July 2014 (UTC)
Special:MostTranscludedPages says the template is used on 126,399 pages! - I wouldn't know how to save all that or even if it's worth saving. I did save "what links here" only for the Template namespace, thinking those are where trouble is likely to originate (if at all). I also edited all the other linkage for the template, except of course the usage on this page, to reflect plain text instead. -- George Orwell III (talk) 04:31, 5 July 2014 (UTC)
Eek! I knew it would be a lot (I was thinking something just over 50,000) but that is rather frightening. Put it down as just a silly idea. AuFCL (talk) 06:17, 5 July 2014 (UTC)
Template:! is no longer transcluded anywhere, and is no longer linked from anywhere except here. Special:MostTranscludedPages is cached and hasn't updated yet; Special:WhatLinksHere/Template:! is the authoritative source. The template is ready for deletion. If anyone is worried about breakage, let me know what's necessary to relieve those worries. Jackmcbarn (talk) 04:29, 9 July 2014 (UTC)

Old work needs some modernising

I am doing some disambiguating, and find that the work Index:Oxford Book of English Verse 1250-1918.djvu uses some methodologies that are out of date and disrupt the display of the work by our current standards. Example: Page:Oxford Book of English Verse 1250-1918.djvu/45 and its transclusion to Piers Plowman (Langland, Oxford Book of English Verse 1250-1918), where the references are old style and do not display clearly as notes when transcluded. The sections don't include the name of the work being transcluded. So to me, a reasonable way to approach an update would be to include the headings, convert the notes to refs (possibly after the name of the work, which is different from the original display), and to add the relevant references to the required transcluded works in the main namespace. So if there is anyone who would like to work on fixing it, it is there for the taking; otherwise, I will get to it some time in the next few weeks, once I have finished other tasks. — billinghurst sDrewth 00:40, 2 July 2014 (UTC)

Video and audio transcripts?

There is discussion of making transcripts of WMF audio and video available somewhere. All input is more than welcome at wikipedia:Wikipedia talk:Wikipedia Signpost#Transcripts of audio interviews. John Carter (talk) 16:41, 3 July 2014 (UTC)

I have commented that I thought it fits within the overall content that Wikisource includes, though one of our colleagues was less certain. To me it fits within the overall principle of published works, and if pre-1923, we would include it, as we have done for some early movie files that we have transcribed. That said, our colleague is correct that WS:WWI is somewhat vague on post-1922 works, and if we look a little harder, it is truly vague on contemporary works (multimedia +) released on a free licence that don't fit within the print-world model. My overarching view was that a transcription of the video should occur, and if it is to be hosted within WMF, then the question is where; to me it is clear that it is our responsibility to have such works clearly within scope where they are published and meet our licensing requirements. — billinghurst sDrewth 04:43, 4 July 2014 (UTC)
[As Devil's Advocate]: Part of the discussion also considers the idea of having translated captioning in several languages. My experience is that, if it's a multilingual work, then the community doesn't want it here, preferring it to reside in the no-man's-land of the multilingual wikisource. I do not agree with this approach, but that is my experience. The question of how to handle such captioning is a very real question, since we have to address the desire of multiple language speakers to see such a video and read its transcripts. Wikisource (not just this one) is very ill-equipped to handle multilingual media; Commons does it already, and also handles video and audio files well. --EncycloPetey (talk) 18:22, 5 July 2014 (UTC)
We are the English version, and for works that are broadly multilingual, mulWS has always been the home. I haven't seen an impetus for change, here or on the mailing list <shrug>. Video captioning of a work is a separate issue from us hosting a transcription, as a transcription is a derivative of the original work, and both should exist at Commons. People can take a hosted transcription and do as they please with it if the licence allows it. I would agree that the derivative work aspects are for Commons, but that doesn't separate us from the original transcription of a published work. To be clear, I don't see that we are doing more than hosting the English language transcription. — billinghurst sDrewth 03:17, 6 July 2014 (UTC)
In this case, however, the only place the work was "published" was at Commons. Are we willing to extend what we host to transcripts of files that were only published on Commons? --EncycloPetey (talk) 13:18, 8 July 2014 (UTC)
[cont'd] And were such a file here, then the question becomes how to interface, catalog, and curate such an animal. Part of the impetus behind the Signpost discussion was dissatisfaction with the problems of finding such files on Commons. We have no mechanism for cataloging such files, and frankly our categorization of what we already have is so poor that I can't imagine it being easier to find a particular interview here as opposed to finding it on Commons. Couple that with the idea of hosting original content not published elsewhere, and of having translated captions in multiple languages, and you've got an idea that's dead before it ever gets started. --EncycloPetey (talk) 18:22, 5 July 2014 (UTC)
Portals, wikiprojects, authors … numbers of ways. They are issues, not insurmountable, and like anything else, they drive us to improve. The original content is the audio, or the audio-visual, and that is what is published; we are transcribing a published work in its original language. We would also be using something like {{listen}}, as I did with Sumana's recent works, and as we have done for works that Theornamentalist did in Category:Film and other works like Category:Works with videos‎. It is hardly different to some of the transcripts of US presidential speeches that we host, which have been done from recordings. There are clearly new things to consider, and some changes that we may need to make; with all that in mind, the transcription of a published work falls within our remit. — billinghurst sDrewth 03:17, 6 July 2014 (UTC)
I don't particularly see it as in scope. Would we keep these if they were from TED or Apple? If not, then I would say they belong on, not here.--Prosfilaes (talk) 02:50, 8 July 2014 (UTC)
I don't have an issue with the idea of expanding our scope. We are a library, and at least in my experience libraries house all types of consumable media. There remain several hurdles to this proposal that would need to be addressed prior to expanding the scope to include these works. WS:WWI needs to accurately describe what is in scope; we are not a vanity press, and I don't expect anything to change that. Also the whole categorization thing, which presumably would follow the WWI discussion, or at least occur in tandem. JeepdaySock (AKA, Jeepday) 10:31, 8 July 2014 (UTC)
I think Wikimedia projects do better when they separate out Wikimedia stuff and non-Wikimedia stuff. If we take this type of stuff, we should make it under non-Wikimedia specific rules.--Prosfilaes (talk) 18:06, 8 July 2014 (UTC)
Jimbo has indicated that he doesn't think the WMF would care which WMF entity hosts transcripts, so long as one of them does. As an individual, I think that since there are probably a few radio broadcasts that could reasonably be transcribed, including both fiction and news or other nonfiction, at least some of which will be as or more significant than some of the works of the same basic type already included here, this site would be the logical place to host them all. John Carter (talk) 14:54, 9 July 2014 (UTC)

British Library books

Hi all,

A year or two ago I posted about the British Library's nineteenth-century book collections - these were sort-of-available for distribution at the time, but in a very convoluted way involving my having to get them and upload them without a clear index available. Time has moved on, and now:

a) the electronic versions are available online (hurrah!) without having to be inside the BL network;
b) there is, sort of, an accessible search engine restricted to the digitised books;
c) the books have been used to generate the (remarkable) million-image Mechanical Curator collection

Not all plain sailing, though. First, the method originally used to generate the PDFs does not work well with MediaWiki - page images don't display properly - and so they need to be converted to djvu before uploading. (An example, with image-description tags, is here)

Secondly, searching for them is still a little clunky. They can be found using the main catalogue search, then refining the access options to "online" and the format to "book". (This rules out any on-site resources) The result is a strange combination of publicly-accessible government/EU documents and pre-1900 public-domain books available for download as PDFs...

Anyway, have a dig around! Some good material in there even if it's not amazingly well catalogued. Andrew Gray (talk) 21:52, 4 July 2014 (UTC)

. . . Phew, (wiping the sweat off my eyebrows), I was worried that there are no illustrations left for me to clean and upload to the commons.— Ineuw talk 09:36, 5 July 2014 (UTC)
@Andrew Gray: Great news, thanks for bringing it to us. I will add the link to Wikisource:Sources, probably with a little pointer to this text. — billinghurst sDrewth 14:36, 5 July 2014 (UTC)

Possible topical WikiProjects?

So far as I can tell, the WikiProjects which have been created here have tended to be on specific works, not broader topics like those at wikipedia. Has there ever been an attempt to create such projects here? I imagine one thing they might be able to do is develop content on "classic" or widely useful PD works, which could also be used in developing content at wikipedia and the other WMF entities. John Carter (talk) 17:18, 5 July 2014 (UTC)

@John Carter: WikiProjects have followed people's interests. There are plenty of projects that cover broader topics … Special:PrefixIndex/Wikisource:WikiProject though without critical mass they simply wax and wane. — billinghurst sDrewth 03:00, 6 July 2014 (UTC)
I was somewhat more thinking along the lines of having maybe 2 to 3 dozen WikiProjects on broad topics, like history, military history, philosophy, religion, visual arts, music, and similar, with maybe an equivalent number of "geographical" projects. They might even have similar naming to similar WikiProjects at wikipedia to maybe help encourage and bring more editors from there to here, and maybe be between them broad enough in scope to cover most everything we see here. The majority of the project page might just be a list of relevant extant works or indexes here, with maybe a list of relevant articles/pages from Britannica and the like. But I very much think making it easier for the numerous wikipedia editors to find something useful and of interest to them here would probably help both entities. John Carter (talk) 18:38, 6 July 2014 (UTC)
Have you had a look at the way we use Portals here? It's different to enWP and is probably more akin to what you're thinking. Beeswaxcandle (talk) 05:10, 8 July 2014 (UTC)
Special:PrefixIndex/Portal:, though we haven't coordinated works in such a place, which would be due to the lower numbers of participants. If you add works to those pages, even if just the names of the works, then we would ask that you add them to author pages too. If a work is uploaded to Commons, and the index page is started, then we have {{small scan link}} which indicates that the work is available to participants. If you have identified a scan of a work off-site then we have {{external scan link}} that can be used. — billinghurst sDrewth 09:53, 8 July 2014 (UTC)
I wasn't thinking of adding any, just listing those that already exist in one central location for easier access. John Carter (talk) 22:10, 8 July 2014 (UTC)

United Nations Security Council Resolution category names

I noticed that all United Nations Security Council Resolution documents have "United Nations" spelled out in the title, but Category:UN Security Council Resolutions and its sub-categories (like Category:UN Security Council Resolutions in 2000) do not. They are all spelled out in the corresponding Wikipedia categories, and I would think our practice here would be to have them spelled out also. Can we do a mass move? I'd be glad to make the fixes on the affected pages if the categories are renamed, or this could all be done by bot. Cheers! BD2412 T 00:35, 6 July 2014 (UTC)

I don't see that it is really necessary. Wikidata them, and that should align them. Xwiki there is variation after variation in naming of articles and categories. — billinghurst sDrewth 02:56, 6 July 2014 (UTC)
Aren't things like that usually spelled out on this wiki? It is not just an interwiki inconsistency, but an internal inconsistency here, since all of the actual document page titles spell out "United Nations", and several Wikisource categories also spell it out. BD2412 T 16:08, 6 July 2014 (UTC)

Tech News: 2014-28

07:07, 7 July 2014 (UTC)

Please do not attack the admins

The following discussion is closed and will soon be archived: No idea what is going on here, but it is not constructive and this does not seem to be the place for discussing what ever it is. JeepdaySock (AKA, Jeepday) 19:17, 9 July 2014 (UTC)

Index transition between pages

Anyone have any good ideas how to best transition between pages 253 and 254 so that it translates better in the Mainspace? Thanks, Londonjackbooks (talk) 18:52, 9 July 2014 (UTC)

Maybe wrap the continuance in a <noinclude>, or put it in the Page: header so it doesn't transclude? Eg the very first item in the index on page 254 that says "Department of Propaganda in Enemy Countries—cont." which is just a continuation of the item/header introduced in the previous section? Unless there was something else you were concerned about and I'm just being willfully blind at the moment... Mukkakukaku (talk) 02:05, 10 July 2014 (UTC)
Thanks. Not sure how to explain it, but the formatting of the text also seems to shift between pages (paragraph style, etc.). I'm not sure how to tie the two sections together. Londonjackbooks (talk) 02:18, 10 July 2014 (UTC)
I think I found a solution to one issue using hanging indent. Seems to have done the trick. I also used your recommendation of noinclude for the other issue. Thanks, Londonjackbooks (talk) 02:35, 10 July 2014 (UTC)
@Londonjackbooks: I would actually use {{hanging indent inherit}}, and holistically. It is an open formatting template, which can be closed as required. You may also consider using {{anchor+}} on each alpha anchor, and we can add a ToC template in the header when it transcludes to allow easier navigation. — billinghurst sDrewth 16:29, 10 July 2014 (UTC)
I might just make a mess of things, but I will try it when I have time. If anyone wants to have a go with it before I do, please feel free! Thanks, Londonjackbooks (talk) 22:02, 10 July 2014 (UTC)

Proposal for online Edit-a-thon event for Complete Works of Lenin[edit]

Is it possible to arrange a 2-3 day online Edit-a-thon event to create the complete works of Lenin at English Wikisource? The source of all the works of Lenin is in this link. There are 45 volumes of work, too difficult and time-consuming for a single editor. If an online Edit-a-thon event can be arranged, it will be great, as more and more people will participate. -- Bodhisattwa (talk) 08:21, 11 July 2014 (UTC)

I can't see any scans in that link. Am I missing something? Beeswaxcandle (talk) 08:29, 11 July 2014 (UTC)
The 45-volume Collected Works was completed in 1977 and translated by the Institute of Marxism-Leninism. Since it was a consistent translation body, it's logical to assume that the translations were made around, and as late as, 1977—too short a time ago for any of the copyright to have elapsed even under a publication date plus fifty years copyright span. ResScholar (talk) 11:38, 11 July 2014 (UTC)
The works of Lenin are in the public domain. It can be seen here. It clearly says, Public Domain: Lenin Internet Archive (2003). You may freely copy, distribute, display and perform this work; as well as make derivative and commercial works. Please credit “Marxists Internet Archive” as your source. -- Bodhisattwa (talk) 14:10, 11 July 2014 (UTC)
No need to shout. We have already discussed that site's unique copyright perspective at Wikisource:Possible_copyright_violations/Archives/ It turns out we didn't agree with it. ResScholar (talk) 19:41, 11 July 2014 (UTC)
Hey man, I was not shouting. I was just telling. Was it due to the bold font that you thought I was shouting? My fault. OK, I have changed it to italics. I think it's OK now. Really very sorry if I unintentionally hurt you. No hard feelings. :-) -- Bodhisattwa (talk) 21:31, 11 July 2014 (UTC)
@Bodhisattwa: Don't sweat it. There are individuals around here so delicate and sensitive that a misplaced "^" cuts them. You are doing fine. AuFCL (talk) 22:04, 11 July 2014 (UTC)
OK, so there are no scans, and (assuming the English translations are indeed PD) as a result this would just be a big text dump without the ability to verify the text against publication. While I commend the idea behind this proposal, without scans to back up the works I can't support it. Beeswaxcandle (talk) 01:55, 12 July 2014 (UTC)
Apologies, ResidentScholar, I did not mean to delete your message. I did not realize I had.
[re-posted] @ Bodhisattwa, as Beeswaxcandle stated, "I don't see any scans" (either). But if there are scans then perhaps you can do the earliest volumes and let the others wait? Especially since it will take a long time to do the first ones. Wikisource or something like it may still be around. Kind regards, —Maury (talk) 20:27, 11 July 2014 (UTC) —Maury (talk) 02:40, 12 July 2014 (UTC)
Our purpose for edit-a-thons IMO should be new works supported with scans, rather than copy-and-pastes of works already transcribed elsewhere. Anyone is welcome to set up a wikiproject to coordinate, but I don't see the purpose in the community putting effort into Lenin's works where they already exist. — billinghurst sDrewth
I agree with what billinghurst has stated, because when I first read about the Lenin idea, all text as far as I could see from the posted link, I thought that it would be too much like Gutenberg's all-text files. Just copy their all-text files. Too, after doing so many books, especially illustrated books, all text at the length of 45 volumes on Lenin seemed to me terribly boring, which is why I (and perhaps others) here don't go to and remain with Gutenberg. —Maury (talk) 05:19, 12 July 2014 (UTC)
Marxists Internet Archive: "The Former U.S.S.R. did not abide by copyright laws until 1973, so works published in the U.S.S.R. before that date are public domain." In the link I inserted above, WS:CV determined that statement was dubious at best and refused to acknowledge it. There's no need for everyone to flout the precedent simply because a humble admin like me was involved in setting it. ResScholar (talk) 09:12, 12 July 2014 (UTC)
As a pointer to @Bodhisattwa: we are not averse to the works being linked externally on Author: ns where they are freely accessible. So we could add the works to Author:Vladimir Lenin, though with so many, we may wish to create a template that clearly indicates that they are external links and to where. — billinghurst sDrewth 07:52, 12 July 2014 (UTC)

Tech News: 2014-29[edit]

07:48, 14 July 2014 (UTC)

Dotted TOC series of templates can be problematic[edit]

We have a number of situations where the "dotted ToC" series of templates are blowing out the limit for transcluded pages and turning up in Category:Pages_where_template_include_size_is_exceeded, usually in the case of long ToCs. I have resolved some; however, we need to look at that series of templates to see if we can lighten the load that they are putting in place. I am presuming that we have complicated nesting or conditions in these, though as they are not mine, and I haven't looked inside, I will leave it to those who favour their use. — billinghurst sDrewth 09:56, 14 July 2014 (UTC)

When I initially created the original version of this template it didn't have sub-template calls and was written to generate the dots with simple CSS border-styles, IIRC partly because I anticipated that using characters for the dots might require recursion or some other resource-intensive legerdemain. I was stymied back then by cross-browser problems in getting some details correct, and by the CSS standards and implementations not being sufficiently developed. Many of those issues may have been resolved now, and with new things like border-image and border-image-repeat, which are (supposedly) supported cross-browser, it may be possible to accommodate whatever application the {{{symbol}}} parameter is for. So exploring a move back to CSS-generated dots might help with the resource consumption.—❨Ṩtruthious ℬandersnatch❩ 17:28, 25 July 2014 (UTC)
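For what it's worth, a minimal sketch of the CSS-leader idea described above (the markup and class names here are hypothetical, not the template's actual output): the dots come from a dotted border on a stretching spacer, so no filler characters reach the expanded wikitext at all.

```css
/* Hypothetical row markup:
   <div class="toc-row">
     <span>Chapter I</span><span class="toc-leader"></span><span>17</span>
   </div> */
.toc-row { display: flex; }
.toc-leader {
  flex: 1 1 auto;                          /* stretch to fill the gap */
  border-bottom: 2px dotted currentColor;  /* the leader dots         */
  margin: 0 0.25em 0.3em;                  /* sit near the baseline   */
}
```

Flexbox support was still patchy in 2014, so a float- or table-cell-based variant may have been needed at the time; the principle (zero filler characters in the post-expand output) is the same either way.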

Template:Dotted TOC page listing/1-9 full of bloat[edit]

I have been trying to resolve issues with certain pages blowing out the post-expand include size, which basically means that not all pages of a work get transcluded. I noticed some particular issues with the Dotted TOC page... templates, and I narrowed it down to Special:PrefixIndex/Dotted TOC page listing/ templates 1 through 9. These templates are about 2kb each, and when used multiple times on a ToC over multiple pages they are an issue (the limit is 2,048,000 bytes, and these pages have blown it). In short, when we fill these templates with either &nbsp; or &#160; we get severe bloat. To put that into context, we can fail on five or six transcluded ToC pages, yet we can often transclude 100 normal pages in a chapter without issue. This is partly an issue of heavier coding in ToCs due to tables, but also the really heavy burden of the current form of dotted leaders.

Examples of what was happening (the figures are taken from the "show source" view of a page); here I have taken one page from The Army and Navy Hymnal/First Lines of Hymns:

Page:The Army and Navy Hymnal.djvu/12 (with Template:Dotted TOC page listing/5 and all &nbsp; and dots)
Post‐expand include size: 410233/2048000 bytes
Template argument size: 10246/2048000 bytes
so at about five transcluded pages of ToC it fails

Page:The Army and Navy Hymnal.djvu/12 (with Template:Dotted TOC page listing/5 without the spaces and dots)
Post‐expand include size: 109273/2048000 bytes
Template argument size: 6880/2048000 bytes
basically a little over a quarter of the size due to use of the dot leader effect

Page:The Army and Navy Hymnal.djvu/12 (with Template:Dotted TOC page listing/5, instead with 2 &emsp; and dots)
Post‐expand include size: 235597/2048000 bytes
Template argument size: 10246/2048000 bytes
still less than perfect with just over half the size, so we may get 8-9 pages of ToC

So in short, I have resolved the current problem with a less ugly hack: for Template:Dotted TOC page listing/5, exchanging five &nbsp; (30 characters per use) for two &emsp; (12 characters per use), and for Template:Dotted TOC page listing/1, exchanging the &#160; (6 characters) for a simple space (1 character). I am not certain that the spacing is correct, and would invite someone who cares about the look of the template series to have a look, and to address parts /2 /3 /4 /6 /7 /8 /9 and look to get the spacing consistent across the range. If we need half spacing, we are better off using a combination of a normal space, &ensp; and &emsp;, and trying to minimise the number of characters used. Ultimately we need a programmed approach, and someone more skilled with Lua may have an idea that is less clunky. @Eliyak: — billinghurst sDrewth 03:50, 15 July 2014 (UTC)

The one Lua solution I knew of was already tried and failed...
Ex call = {{#invoke:String|rep|{{{1}}}&#160;|244}}
...for basically the same reason: instead of achieving CSS letter-spacing (or even magicword-ish padding left/right somehow), we rely on one type of "space" character or another to fill in between the "dots", which of course causes x amount of character bloat. The solution seems pretty simple either way, if the Lua know-how to feign CSS letter-spacing and/or white-space settings were somehow worked into all this. Getting Lua to accept simple "space-bar" spaces would even help. Long story short, I've tried to recruit folks from WB & WP to spend some time here for just this type of support, with little interest shown (maybe it's me?). -- George Orwell III
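For what it's worth, here is a rough Scribunto sketch of the sort of programmed approach mentioned above (the module name, parameters, and inline styling are all hypothetical, and it leans on a CSS-styled span rather than character repetition, so it would need the same look-and-feel vetting as the existing templates):

```lua
-- Hypothetical Module:DottedLeader: emit one fixed-size styled span
-- per ToC row instead of a run of repeated entities, so the
-- post-expand cost no longer grows with the width of the leader.
local p = {}

function p.row(frame)
    local args  = frame.args
    local title = args[1] or ''
    local page  = args[2] or ''
    -- One span of roughly 120 bytes replaces, say, 200 uses of
    -- "&#160;" (1,200 bytes), and the saving multiplies per row.
    return title
        .. '<span style="display:inline-block; width:70%;'
        .. ' border-bottom:2px dotted #000;">&#160;</span>'
        .. page
end

return p
```

A template row would then call something like {{#invoke:DottedLeader|row|Hymn title|123}}. The catch is that CSS-drawn dots, unlike literal dot characters, need cross-browser checking, which is presumably why the entity-based form existed in the first place.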

Google Search and[edit]

Hello! We in the Russian Wikisource have found that Google stopped indexing new pages starting from May 2014. For example, I created a page about the Lithuanian Statute on 9 July 2014, but this page is absent from Google search results now, 14 July 2014. It seems that the problem also applies to other language editions of Wikisource. Something needs to be done about this, because without text indexing by search engines all our work is unavailable to outside users, and we can forget about attracting new members.--Вантус (talk) 17:03, 14 July 2014 (UTC)

I don't see that. For a search, it is showing me a page created on 30 June:

The English Peasant/John Clare - Wikisource, the free ...
Jun 30, 2014 - The English Peasant — Types of English Agricultural Life — John Clare, by Richard Heath. II. A Peasant Poet. (Golden Hours, 1873.)

billinghurst sDrewth 16:06, 15 July 2014 (UTC)

Very interesting. You can see in Russian Wikisource my page Статут Великого княжества Литовского 1566 года and the Google result. Moreover, you can see a search for the last week and find only 4 pages, when actually more than 100 were created.--Вантус (talk) 20:48, 15 July 2014 (UTC)

Index:WALL STREET IN HISTORY.djvu upload[edit]

Index:WALL STREET IN HISTORY.djvu Why is it that this will not upload here to WS as it usually does? I used the same process to upload. File:WALL STREET IN HISTORY.djvu is located on WikiCommons. Thank you to whomever answers this mystery. —Maury (talk) 14:36, 15 July 2014 (UTC)

Just click here and create it!--Erasmo Barresi (talk) 15:13, 15 July 2014 (UTC)
I think that the implicit question was why the template wasn't self-filling with the metadata. I converted File:WALL STREET IN HISTORY.djvu to use the {{book}} template, and it imported the metadata fine, and the Index: ns page is created. — billinghurst sDrewth 15:44, 15 July 2014 (UTC)
Thank you guys! May we all continue creating a wonderful Wikisource Library together as all people here should. I suppose a tl|book is a tough luck book? I have never had to use that when uploading books here. smiley —Maury (talk) 17:27, 15 July 2014 (UTC)
Template:Book has been recommended for books for a while now. It is also the default when you use Tpt's toollabs:ia-upload tool that takes Internet Archive works and adds them to Commons. It takes a moment longer to complete, though I think that it is of more value, especially as it inhales more data into the Index: pages. — billinghurst sDrewth 02:30, 16 July 2014 (UTC)
Since you like this method, it is obvious to me it is the best way to handle uploading a new book. However, *I* do not understand it, but I easily understand the previous method, as well as removing Google pages (all Google watermarks) from a .PDF file, uploading it to IA to derive a .djvu (if need be) without "Google" even mentioned (as you asked another here to do with the 1st page of a djvu), and uploading to Commons, then placing the book here, which I would guess is a long process but it is do-able. I have done several books the older way, and very recently, and I do hope the older option will remain at least an option. I have more very good books by this same authoress to upload, including the book on early American houses and architecture, and more. Now I find a blockade before me. The older method was best for me. I dislike learning "how to" over and over with each upgrade. I back off from that and do not engage in more upgrading processes. I don't mean to sound like an asp; it is just a matter of a poor learning curve of an old guy. Please, if possible, leave the older way as an option.

I understand, this new method is better -- yes, for those who work with all sorts of scripts perhaps, but not for everyone. I recall what a fellow on Wikipedia told AdamBMorgan about Wikisource -- it is convoluted. Respectfully, —Maury (talk) 03:44, 16 July 2014 (UTC)

I have no idea why Template:Information didn't work; I simply converted the data to Template:Book and it did work. <shrug>

With regard to asking for no change: while that is your desire, it isn't reasonable, and I don't think that there was any direct change to Tpt's metadata import. Developers are fixing lots of things at lots of times, and sometimes a change has a greater impact than expected. You can always populate the Index: template manually. — billinghurst sDrewth 05:07, 19 July 2014 (UTC)

That's okay, perhaps next time I can get it to work. It is an enlightening book and much more than I expected. Thank you kindly for the help and for the reply. —Maury (talk) 05:36, 19 July 2014 (UTC)

Annotation policy[edit]

I recently opened a Request for comment about the annotation policy. I would appreciate everyone's input.--Erasmo Barresi (talk) 20:39, 16 July 2014 (UTC)

Chronicling America[edit]

hi, i have been asked if there is interest in uploading some Chronicling America newspapers from the Library of Congress. it has a microfiche view, and an ocr text layer that allows text search; but wikisource would be better to read on a phone or tablet. is there interest in some selective ones as a pilot? Slowking4Farmbrough's revenge 23:58, 18 July 2014 (UTC)

  • Huh, that's kind of cool. I didn't know such a thing existed. (Though the site and its search functionality do have a strange concept of geography. It seems to think Salt Lake City is in Colorado.) Maybe a historically relevant edition as a trial run? Like something related to the Spanish-American War or World War I?
I'm going to go through this one and see how difficult and/or time consuming it would be to extract to a single djvu file. --Mukkakukaku (talk) 03:02, 19 July 2014 (UTC)
  • One thing I'd be concerned about would be the fact that stories that came across the news wire (eg via the Associated Press or the like) will likely have large sections of duplicated content. This is something I noticed while I was canvassing old newspaper articles for primary sources while writing regular wikipedia articles. --Mukkakukaku (talk) 03:42, 19 July 2014 (UTC)
  • So the PDFs I downloaded for my test newspaper were all corrupted so I couldn't convert them to DJVU and my pdf fixing mojo is apparently not good enough for this scenario. --Mukkakukaku (talk) 07:39, 19 July 2014 (UTC)
thanks very much i will mention this example at the w:Wikipedia:Meetup/DC/Chronicling America and with User:Taylordw. Slowking4Farmbrough's revenge 22:53, 21 July 2014 (UTC)
Fwiw.... in my experience, using common PDF/TIFF/JPG/etc. methods for OCR &/or .DjVu conversion of a typical single newspaper "page" has always produced less than optimal results for me. The issue has to do with the typical height and width dimensions of such scans - not to mention those instances where 3 or more columns appear on a single page to boot. Most of the current [free] services or software seem able to handle such nuances just fine, but only if adjustments are made to the default settings of such entities (never easy). Rumor has it that this is possible even on IA - if you know how to properly manipulate/customize the derivative settings when you begin the derive process, that is (and of course that linked table doesn't even list all the currently available options).

I've given up experimenting with this but someone like Nemo bis might know how to achieve this specifically for newspaper scans. It might be worth bringing him into the discussion before coming to any solid conclusions on our own. -- George Orwell III (talk) 23:30, 21 July 2014 (UTC)

  • Another comment: even the original OCR from the Chronicling America site was pretty pathetic for my test case and woefully incomplete, so it wasn't even a viable copy-paste job from the text layer available on the site itself. Maybe I just picked a bad test case and there are other newspaper editions that have better text layers. --Mukkakukaku (talk) 23:58, 24 July 2014 (UTC)

Index:Great Speeches of the War.djvu[edit]

I've done some proofreading on this; of the three remaining unproofread speeches, I can't do these for copyright reasons. Perhaps someone US-based can? ShakespeareFan00 (talk) 13:45, 19 July 2014 (UTC)

Which speeches, exactly, are you concerned about? Mukkakukaku (talk) 17:18, 19 July 2014 (UTC)
Speech by Rt. Hon. Winston S. Churchill starting on p. 282 (he died in 1965, so it's not out of UK copyright).
Speech by M. Paul Hymans starting on p. 258 (he died in 1941), so nominally expired in 2011, but I wasn't sure whether the origin country (Belgium) applied an extension for those involved in the war.
Speech by M. Sazonoff starting on p. 263; I can't find a death date online, so I can't be sure of the UK status.

All are pre-1923 publications as far as the US is concerned, though.

ShakespeareFan00 (talk) 18:07, 19 July 2014 (UTC)

Sazonoff would appear to be w:Sergey Sazonov (d. 1927), so this one is OK. :) ShakespeareFan00 (talk) 18:11, 19 July 2014 (UTC)

Tech News: 2014-30[edit]

07:41, 21 July 2014 (UTC)

Mainspace styling?[edit]

Did something change in the space of the past 5 days in terms of the styling of transcluded works? (That is, works transcluded using the <page> tag?) Or is this once again one of those weird side effects of using the MonoBook styling where things happen spontaneously because the rest of the world uses Vector?

This is what I observed:

  • Five days ago, when I was on WS last, the styling of those pages was specific: font was serif, text was a narrow stripe about 500px down the center of the screen, paragraph indicators were on the far left separated from the text by a big gap.
  • Today I show up again and the transcluded text is in sans-serif font, stretches almost the full width of the screen, and the paragraph indicators are in danger of being overlapped by the regular text.

Is anyone else seeing this or am I having a moment? -- Mukkakukaku (talk) 03:46, 25 July 2014 (UTC)

Check if you have selected Layout 1, 2 or 3 on the left Tool panel. The Wikisource cookie seems to store an initial selection. Delete the cookie, Log out and in and try again.--Ineuw (talk) 04:13, 25 July 2014 (UTC)
PS. Post the page you were looking at.--Ineuw (talk) 04:15, 25 July 2014 (UTC)
I am using monobook and not having issues of difference. Tried flushing your cache? (ctrl-F5) As Ineuw said, a page as an example would be helpful. — billinghurst sDrewth 07:37, 25 July 2014 (UTC)
It's happening on all pages, as far as I've noted. But if you want an example, Burwell v. Hobby Lobby, Inc. I never knew the 'layout' thing existed, but it's set to Layout 1. Is that a new feature? I'm pretty clueless about most WS-specific things. I will try flushing the cache and clearing the layout cookies when I get back from work. Mukkakukaku (talk) 12:46, 25 July 2014 (UTC)
I'm using Vector, and when it says "Layout 1", I see full-width text in a sans serif font. When I click that, it switches to Layout 2, and it's more like what Mukkakukaku describes above: narrow column of text in a serif font. WhatamIdoing (talk) 15:54, 25 July 2014 (UTC)
Layouts have been in use for a number of years, so the one generated by toggling through them is presumably what you saw as the issue. Otherwise, all that I can see that has changed related to the page is an edit on 22 July 2014 to Template:Indent/s‎ by George Orwell III, though it doesn't seem likely to account for the difference that you note. — billinghurst sDrewth 08:54, 26 July 2014 (UTC)
Well I am rather unobservant. :) I never noticed the 'Layout' options until now.
I toggled through all the layouts, then cleared my cache, shut down the browser, cleared temporary internet files, restarted the browser and finally now it looks ok. Clearly my browser has gremlins or something.
Thanks, all. Mukkakukaku (talk) 03:33, 27 July 2014 (UTC)

Editor Bar[edit]

1/2 of my editor bar, the beginning, is gone. I ask that someone please restore the 2nd half -- where "zoom in" and "zoom out" plus "expand page width" options for editing be restored. (Please). Thank you to whomever assists. —Maury (talk) 12:08, 25 July 2014 (UTC)

No issue for me (monobook). Reloaded your cache? Have you or others been editing any of your javascript (.js) files? — billinghurst sDrewth 08:46, 26 July 2014 (UTC)
Nothing out of whack here either. As for Maury's common.js file, to me, parts of it look outdated. I'm guessing it could use a review & rewrite by somebody who is actually fluent in javascript.

The other thing that comes to mind is your User: Preferences settings. The developers have made changes to the defaults, as well as added and removed some options, over recent months without much testing actually being done on "smaller" projects like Wikisource, so you might have a "conflict" coming from there (this week, for example, Threshold for stub link formatting won't collapse into a drop-down menu like it's supposed to). I'm willing to spend some time at least reviewing your User: preference settings if you are; can't say if it will help any. -- George Orwell III (talk) 05:02, 27 July 2014 (UTC)


Maury, I made an edit to that .js file by replacing the RegEx Menu Framework utility with its recommended replacement, TemplateScript. See if that restores your built-in edit bar/tools. If that helped, the problem then becomes how to import all your old tools n' stuff into the new framework.

Either way, if it didn't help just undo my edit. -- George Orwell III (talk) 05:35, 27 July 2014 (UTC)

I see no need to undo your edit, George. It does no harm, and I do not tinker with common.js because I don't know what I am doing. The scripts are over my head. BTW, Billinghurst asked if I were using Monobook as he was, but I was using Vector, which has presently been changed to Monobook, and I see no difference in functions. It still works. Respectfully, —Maury (talk) 15:02, 27 July 2014 (UTC)
Parts of my common.js are outdated. I was told this by an administrator here, User:Ineuw, who seems to know what problems exist. I have no use for Hesperian's script; I have never used it, but it did not affect my editor. My editor still works except for what I asked for. I think it is best not to tamper with what does work, because I have been using my editor as-is with no problems for a few weeks. A button (Abc) has now been added, and I don't need it since I am in the long-time habit of doing this by hand; ditto with {{hyphenated word start}} and {{hyphenated word end}}. Thank you for trying to add what I wanted back. —Maury (talk) 14:15, 27 July 2014 (UTC)

Actually you do use at least one of Hesperian's scripts when you invoke the clean-up one using alt-shift-x during proofreading or validating. At the time I put that one into your .js I wasn't sure which of the others you would eventually use, so I gave you the lot. Also my javascript is very rudimentary, so I'm not very sure what does what within the whole collection. Beeswaxcandle (talk) 05:43, 28 July 2014 (UTC)
@William Maury Morris II: I'm not clear on what exactly you are most comfortable with (or want?) at the end of the day. Is it the old "blue button" toolbar or the newer toolbar? I just got done loading your .js contents into my .js file and - other than the Contrast adjustment thing - I don't see what "benefit" any of those settings/scripts/add-ons actually give you. I'm sure the "Hesperian" additions are useful - but they don't exactly seem to work (at least not with my system). And it seems like the newer toolbar (called WikiEditor) and the CharacterInsert tool can handle those bits & pieces if they don't already.

I guess the first thing to ask is which approach you'd rather be using, the "old" or the "new", and then review your Preference settings....

NOTE.-- the "User prefrence checklist" originally posted here was later moved to Help:WikiEditor/Troubleshooting for future reference & refinement. -- George Orwell III (talk) 06:03, 28 July 2014 (UTC)
  • Thank you everyone who has sought to help. I have all I need at this point so please do not change anything. Respectfully, —Maury (talk) 05:53, 28 July 2014 (UTC)
Not to rub you the wrong way or anything, but what was the point of raising your toolbar's issues in the first place if you didn't really plan on going through the motions of working towards a solution? I'm just curious as to what changed, if anything, to make you go right back to the settings you had prior to opening this discussion?

Did the problem go away? Did the edits I (we) made to your settings make the situation worse? In what sense? Was the approach leading to a session of troubleshooting just a put-off? Over your head, maybe? I'm left a bit puzzled is all. -- George Orwell III (talk) 06:32, 28 July 2014 (UTC)

The New Yorker[edit]

Anyone want to use the access opportunity to grab pre-1923 material? 09:21, 26 July 2014 (UTC)

Oh well :( ShakespeareFan00 (talk) 15:23, 26 July 2014 (UTC)
Sadly, the first publication of The New Yorker was February 21, 1925. (I read it on w:The New Yorker so it's gotta be true.) --Mukkakukaku (talk) 17:24, 27 July 2014 (UTC)
renewals appear to begin for 1950 works [90]; they consistently renewed thereafter, but maybe a trip to the reading room would be worth it for pre-1978 renewals. Slowking4Farmbrough's revenge 22:37, 28 July 2014 (UTC)
So possibly all the works between 1925 and 1950 have expired copyrights? JeepdaySock (AKA, Jeepday) 10:38, 29 July 2014 (UTC)
I don't think so, I'm afraid. The first issue was renewed in 1952 (scan, see third column) along with what looks like every issue in the first year. They may have missed some but it's likely they were all renewed from there on. (FYI: The first copyright renewals for periodicals list is pretty handy for this sort of thing). - AdamBMorgan (talk) 11:42, 29 July 2014 (UTC)

In case you missed it....

The Wikipedia Library has new free signups available for American newspaper database, British genealogical database, philosophy and women's writers collections at Past Masters and several large collections from Adam Matthew. Medical editors can sign up for BMJ and Cochrane. Other accounts available for JSTOR, British Newspaper Archives, Credo, Questia, HighBeam, and Oxford University Press. Sign up! -- George Orwell III (talk) 15:56, 29 July 2014 (UTC)

Tech News: 2014-31[edit]

08:08, 28 July 2014 (UTC)

The Film Daily[edit]


With the help of McZusatz, I uploaded some issues of this US magazine: The Film Daily. One issue doesn't have a DJVU, which I could make manually if there is interest. Anyone interested? Yann (talk) 09:19, 28 July 2014 (UTC)

Does that really matter? You already went to the trouble of archiving a large swath, if not all, of the other issues - why take the chance you'll be flying with Malaysian Airlines one day and somebody who'd be interested in taking up the project shows up a day or two later? What then?

Seriously, the source .pdf is 250Mb to begin with. We should not expect my theoretical newcomer to somehow wrangle that into a .DjVu on their own, navigate Commons for 100Mb plus uploads successfully and then still be left with the task of compiling a ~2000 page pagelist here on WS, should we? Please, if you have the time, complete the series regardless of any interest (or lack thereof) being shown at any given moment. -- George Orwell III (talk) 23:07, 29 July 2014 (UTC)

Request for un-deletion of pages[edit]

Re: Index:Book of record of the time capsule of cupaloy (New York World's fair, 1939).djvu

Hello, I recently became interested in transcribing the Book of Record of the first w:Westinghouse Time Capsule which, through non-renewal, has fallen into the public domain. It seems User:Cygnis insignis had started a little bit of work on it, but then deleted it without explanation. I already re-created Index:Book of record of the time capsule of cupaloy (New York World's fair, 1939).djvu and Page:Book of record of the time capsule of cupaloy (New York World's fair, 1939).djvu/9, but I don't want to re-create any more deleted pages, and I certainly don't want to do so if there is a legitimate reason why this shouldn't be on Wikisource.

If there is no reason not to include this text, on the other hand, it would be great if someone could resurrect these pages so that I could build off of them. Phillipedison1891 (talk) 20:43, 29 July 2014 (UTC)

I can find the original 1938 copyright registration but not any clue that there was a renewal...
  • Pendray, G. Edward. Book of record of the Time capsule of cupaloy deemed capable of resisting the effects of time for five thousand years, preserving an account of universal achievements embedded in the grounds of the New York World's fair, 1939. © Sept. 23, 1938; 2 c. and aff. Oct. 8; A 121912; Westinghouse electric & manufacturing co., East Pittsburg. 13026
... anyone verify that? If so, undeletion should not be an issue. -- George Orwell III (talk) 22:18, 29 July 2014 (UTC)
Check that. After inspecting CI's edit summary prior to deletion, it appears the source file is missing pages (i.e. incomplete) -- George Orwell III (talk) 22:18, 29 July 2014 (UTC)
Not from my end. The main part of the book contains numbered pages 5 through 51, and there are 46 pages of content there. Also, I read the whole thing, so unless I'm missing something staring me in the face... Phillipedison1891 (talk) 22:35, 29 July 2014 (UTC)
Grrrr... check that as well but you beat me to it.

On a hunch, I jumped through the DjVu on IA and I didn't find any missing pages (or images) in the file there either. So we're back to somebody else verifying no copyright renewal. -- George Orwell III (talk) 22:39, 29 July 2014 (UTC)

If my understanding of US copyright is correct, a 1938 publication would require renewal in 1965 or 1966. I used the resource at [103] to check all 4 relevant catalog sections (one for each half of each year), did a text search for "cupaloy" and "time capsule" and came up with nothing. Phillipedison1891 (talk) 22:50, 29 July 2014 (UTC)
If I'm not mistaken, the best resource for this is the Stanford Copyright Renewals Database. Hesperian 00:50, 30 July 2014 (UTC)
It isn't listed under an author surname search, @Phillipedison1891:. I will add a comment to the file at Commons to cover that aspect. — billinghurst sDrewth 08:11, 30 July 2014 (UTC)
@Phillipedison1891: That is exactly the kind of vetting we're hoping for. Thanks. But now something else seems to be a potential issue.

I started restoring the previously deleted pages and began to notice pages marked as 'Proofread' that had entire sections or paragraphs missing, so I stopped to check back with you before going any further. Is it easier to manually insert/correct these [apparently consistent] drop-outs, or is it better to just discard sloppy work and start the pages from scratch instead? -- George Orwell III (talk) 23:22, 29 July 2014 (UTC)

My mind must be playing tricks on me - the content seems to be all there, but not up to par with the given PR status. I'll keep restoring, but one should disregard the existing PR statuses. -- George Orwell III (talk) 23:25, 29 July 2014 (UTC)